Jan 21 00:09:04 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 00:09:04 crc kubenswrapper[5118]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 00:09:04 crc kubenswrapper[5118]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 21 00:09:04 crc kubenswrapper[5118]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 00:09:04 crc kubenswrapper[5118]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 00:09:04 crc kubenswrapper[5118]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 21 00:09:04 crc kubenswrapper[5118]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.779718    5118 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783709    5118 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783730    5118 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783736    5118 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783743    5118 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783749    5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783754    5118 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783759    5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783764    5118 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783768    5118 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783773    5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783779    5118 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783783    5118 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783788    5118 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783792    5118 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783797    5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783802    5118 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783807    5118 feature_gate.go:328] unrecognized feature gate: Example2
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783811    5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783816    5118 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783820    5118 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783825    5118 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783829    5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783833    5118 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783837    5118 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783841    5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783847    5118 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783852    5118 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783857    5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783885    5118 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783890    5118 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783894    5118 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783899    5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783903    5118 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783907    5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783912    5118 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783916    5118 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783920    5118 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783925    5118 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783929    5118 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783933    5118 feature_gate.go:328] unrecognized feature gate: Example
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783937    5118 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783942    5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783946    5118 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783950    5118 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783954    5118 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783958    5118 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783963    5118 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783967    5118 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783971    5118 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783975    5118 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783979    5118 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783984    5118 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783988    5118 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783992    5118 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.783997    5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784001    5118 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784008    5118 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784012    5118 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784016    5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784020    5118 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784024    5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784028    5118 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784033    5118 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784037    5118 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784041    5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784045    5118 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784050    5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784057    5118 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784064    5118 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784069    5118 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784073    5118 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784077    5118 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784082    5118 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784086    5118 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784090    5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784094    5118 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784100    5118 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784104    5118 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784108    5118 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784112    5118 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784116    5118 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784121    5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784125    5118 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784132    5118 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784138    5118 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784142    5118 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784740    5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784749    5118 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784759    5118 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784763    5118 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784767    5118 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784771    5118 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784776    5118 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784780    5118 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784784    5118 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784788    5118 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784792    5118 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784796    5118 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784802    5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784806    5118 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784814    5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784819    5118 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784823    5118 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784828    5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784833    5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784837    5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784842    5118 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784846    5118 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784850    5118 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784854    5118 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784858    5118 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784862    5118 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784867    5118 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784871    5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784875    5118 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784879    5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784883    5118 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784887    5118 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784891    5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784896    5118 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784901    5118 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784905    5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784910    5118 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784914    5118 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784918    5118 feature_gate.go:328] unrecognized feature gate: Example2
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784922    5118 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784927    5118 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784931    5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784935    5118 feature_gate.go:328] unrecognized feature gate: Example
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784939    5118 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784943    5118 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784949    5118 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784955    5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784960    5118 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784964    5118 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784970    5118 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784976    5118 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784981    5118 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784986    5118 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784991    5118 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.784995    5118 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785000    5118 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785004    5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785008    5118 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785012    5118 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785017    5118 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785021    5118 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785025    5118 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785030    5118 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785034    5118 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785038    5118 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785043    5118 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785047    5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785051    5118 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785055    5118 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785060    5118 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785064    5118 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785069    5118 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785073    5118 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785077    5118 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785081    5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785085    5118 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785089    5118 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785093    5118 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785100    5118 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785105    5118 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785109    5118 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785113    5118 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785119    5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785123    5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785127    5118 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.785131    5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785558    5118 flags.go:64] FLAG: --address="0.0.0.0"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785572    5118 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785582    5118 flags.go:64] FLAG: --anonymous-auth="true"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785589    5118 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785595    5118 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785601    5118 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785607    5118 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785615    5118 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785620    5118 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785624    5118 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785630    5118 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785635    5118 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785640    5118 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785645    5118 flags.go:64] FLAG: --cgroup-root=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785650    5118 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785654    5118 flags.go:64] FLAG: --client-ca-file=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785659    5118 flags.go:64] FLAG: --cloud-config=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785663    5118 flags.go:64] FLAG: --cloud-provider=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785668    5118 flags.go:64] FLAG: --cluster-dns="[]"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785676    5118 flags.go:64] FLAG: --cluster-domain=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785681    5118 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785686    5118 flags.go:64] FLAG: --config-dir=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785691    5118 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785696    5118 flags.go:64] FLAG: --container-log-max-files="5"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785712    5118 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785718    5118 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785723    5118 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785728    5118 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785734    5118 flags.go:64] FLAG: --contention-profiling="false"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785739    5118 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785745    5118 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785751    5118 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785757    5118 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785764    5118 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785769    5118 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785774    5118 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785779    5118 flags.go:64] FLAG: --enable-load-reader="false"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785784    5118 flags.go:64] FLAG: --enable-server="true"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785788    5118 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785794    5118 flags.go:64] FLAG: --event-burst="100"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785799    5118 flags.go:64] FLAG: --event-qps="50"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785804    5118 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785809    5118 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785813    5118 flags.go:64] FLAG: --eviction-hard=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785820    5118 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785825    5118 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785830    5118 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785834    5118 flags.go:64] FLAG: --eviction-soft=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785839    5118 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785843    5118 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785848    5118 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785853    5118 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785857    5118 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785862    5118 flags.go:64] FLAG: --fail-swap-on="true"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785866    5118 flags.go:64] FLAG: --feature-gates=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785872    5118 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785879    5118 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785884    5118 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785890    5118 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785895    5118 flags.go:64] FLAG: --healthz-port="10248"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785900    5118 flags.go:64] FLAG: --help="false"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785904    5118 flags.go:64] FLAG: --hostname-override=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785909    5118 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785913    5118 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785918    5118 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785922    5118 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785927    5118 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785931    5118 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785936    5118 flags.go:64] FLAG: --image-service-endpoint=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785940    5118 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785944    5118 flags.go:64] FLAG: --kube-api-burst="100"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785949    5118 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785954    5118 flags.go:64] FLAG: --kube-api-qps="50"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785959    5118 flags.go:64] FLAG: --kube-reserved=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785963    5118 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785968    5118 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785975    5118 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785979    5118 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785984    5118 flags.go:64] FLAG: --lock-file=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785988    5118 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785993    5118 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.785997    5118 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786005    5118 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786010    5118 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786014    5118 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786019    5118 flags.go:64] FLAG: --logging-format="text"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786024    5118 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786029    5118 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786036    5118 flags.go:64] FLAG: --manifest-url=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786041    5118 flags.go:64] FLAG: --manifest-url-header=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786047    5118 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786052    5118 flags.go:64] FLAG: --max-open-files="1000000"
Jan 21
00:09:04.786058 5118 flags.go:64] FLAG: --max-pods="110" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786064 5118 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786068 5118 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786073 5118 flags.go:64] FLAG: --memory-manager-policy="None" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786077 5118 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786082 5118 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786086 5118 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786091 5118 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786105 5118 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786110 5118 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786115 5118 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786119 5118 flags.go:64] FLAG: --pod-cidr="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786124 5118 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786134 5118 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786138 5118 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786144 5118 flags.go:64] FLAG: 
--pods-per-core="0" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786151 5118 flags.go:64] FLAG: --port="10250" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786178 5118 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786184 5118 flags.go:64] FLAG: --provider-id="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786189 5118 flags.go:64] FLAG: --qos-reserved="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786194 5118 flags.go:64] FLAG: --read-only-port="10255" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786199 5118 flags.go:64] FLAG: --register-node="true" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786204 5118 flags.go:64] FLAG: --register-schedulable="true" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786209 5118 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786223 5118 flags.go:64] FLAG: --registry-burst="10" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786227 5118 flags.go:64] FLAG: --registry-qps="5" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786232 5118 flags.go:64] FLAG: --reserved-cpus="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786237 5118 flags.go:64] FLAG: --reserved-memory="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786244 5118 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786249 5118 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786254 5118 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786259 5118 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786265 5118 flags.go:64] FLAG: --runonce="false" Jan 21 00:09:04 crc kubenswrapper[5118]: 
I0121 00:09:04.786269 5118 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786274 5118 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786279 5118 flags.go:64] FLAG: --seccomp-default="false" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786284 5118 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786289 5118 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786293 5118 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786298 5118 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786303 5118 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786308 5118 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786312 5118 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786317 5118 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786321 5118 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786326 5118 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786331 5118 flags.go:64] FLAG: --system-cgroups="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786336 5118 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786344 5118 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786351 5118 flags.go:64] FLAG: 
--tls-cert-file="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786356 5118 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786363 5118 flags.go:64] FLAG: --tls-min-version="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786367 5118 flags.go:64] FLAG: --tls-private-key-file="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786372 5118 flags.go:64] FLAG: --topology-manager-policy="none" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786376 5118 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786381 5118 flags.go:64] FLAG: --topology-manager-scope="container" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786386 5118 flags.go:64] FLAG: --v="2" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786392 5118 flags.go:64] FLAG: --version="false" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786398 5118 flags.go:64] FLAG: --vmodule="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786405 5118 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.786412 5118 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786538 5118 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786544 5118 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786551 5118 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786557 5118 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786568 5118 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786573 5118 feature_gate.go:328] unrecognized feature gate: Example2
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786578 5118 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786582 5118 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786587 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786591 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786595 5118 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786599 5118 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786604 5118 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786608 5118 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786613 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786617 5118 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786621 5118 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786625 5118 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786629 5118 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786633 5118 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786639 5118 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786644 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786648 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786652 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786657 5118 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786661 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786665 5118 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786669 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786673 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786678 5118 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786682 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786689 5118 feature_gate.go:328] unrecognized feature gate: Example
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786693 5118 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786698 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786702 5118 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786706 5118 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786710 5118 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786715 5118 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786720 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786726 5118 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786731 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786735 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786740 5118 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786744 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786749 5118 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786753 5118 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786757 5118 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786761 5118 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786766 5118 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786770 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786774 5118 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786781 5118 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786788 5118 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786793 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786798 5118 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786802 5118 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786807 5118 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786811 5118 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786815 5118 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786819 5118 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786823 5118 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786827 5118 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786832 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786837 5118 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786841 5118 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786846 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786850 5118 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786854 5118 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786858 5118 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786863 5118 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786868 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786872 5118 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786877 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786881 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786886 5118 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786890 5118 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786894 5118 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786899 5118 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786903 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786907 5118 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786911 5118 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786916 5118 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786920 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786924 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786931 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.786935 5118 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.787319 5118 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.796393 5118 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.796469 5118 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796531 5118 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796540 5118 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796544 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796548 5118 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796552 5118 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796556 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796560 5118 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796564 5118 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796567 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796572 5118 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796580 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796584 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796588 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796592 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796597 5118 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796602 5118 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796607 5118 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796611 5118 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796615 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796621 5118 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796626 5118 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796630 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796635 5118 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796639 5118 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796644 5118 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796649 5118 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796652 5118 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796656 5118 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796659 5118 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796663 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796667 5118 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796672 5118 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796675 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796679 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796682 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796685 5118 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796689 5118 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796692 5118 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796695 5118 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796698 5118 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796702 5118 feature_gate.go:328] unrecognized feature gate: Example2
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796706 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796709 5118 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796713 5118 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796716 5118 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796720 5118 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796724 5118 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796727 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796731 5118 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796735 5118 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796739 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796742 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796747 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796750 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796753 5118 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796757 5118 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796760 5118 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796763 5118 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796767 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796770 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796773 5118 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796777 5118 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796780 5118 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796783 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796788 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796791 5118 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796795 5118 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796798 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796803 5118 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796807 5118 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796811 5118 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796814 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796817 5118 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796821 5118 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796824 5118 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796827 5118 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796830 5118 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796834 5118 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796837 5118 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796841 5118 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796844 5118 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796848 5118 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796854 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796858 5118 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796863 5118 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.796868 5118 feature_gate.go:328] unrecognized feature gate: Example
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.796877 5118 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797017 5118 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797024 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797028 5118 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797031 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797035 5118 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797039 5118 
feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797042 5118 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797045 5118 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797049 5118 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797052 5118 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797056 5118 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797060 5118 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797063 5118 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797066 5118 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797070 5118 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797073 5118 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797076 5118 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797080 5118 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797083 5118 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797088 5118 feature_gate.go:351] 
Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797094 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797098 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797101 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797105 5118 feature_gate.go:328] unrecognized feature gate: Example2 Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797109 5118 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797112 5118 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797115 5118 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797122 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797126 5118 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797129 5118 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797135 5118 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797139 5118 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797144 5118 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797148 5118 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797151 5118 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797172 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797177 5118 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797181 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797184 5118 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797187 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797190 5118 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797194 5118 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797198 5118 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797202 5118 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797206 5118 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797209 5118 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 21 00:09:04 crc 
kubenswrapper[5118]: W0121 00:09:04.797212 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797215 5118 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797219 5118 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797223 5118 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797227 5118 feature_gate.go:328] unrecognized feature gate: Example Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797231 5118 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797235 5118 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797240 5118 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797244 5118 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797247 5118 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797251 5118 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797256 5118 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797260 5118 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797266 5118 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797270 5118 
feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797275 5118 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797280 5118 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797285 5118 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797289 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797294 5118 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797298 5118 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797303 5118 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797307 5118 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797310 5118 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797315 5118 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797318 5118 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797322 5118 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797326 5118 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797330 5118 feature_gate.go:328] unrecognized feature gate: 
GCPCustomAPIEndpoints Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797333 5118 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797338 5118 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797341 5118 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797345 5118 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797348 5118 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797351 5118 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797355 5118 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797358 5118 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797361 5118 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797365 5118 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 21 00:09:04 crc kubenswrapper[5118]: W0121 00:09:04.797368 5118 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.797375 5118 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true 
StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.797863 5118 server.go:962] "Client rotation is on, will bootstrap in background" Jan 21 00:09:04 crc kubenswrapper[5118]: E0121 00:09:04.800553 5118 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.806142 5118 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.806267 5118 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.806773 5118 server.go:1019] "Starting client certificate rotation" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.807084 5118 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.807142 5118 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.812824 5118 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 00:09:04 crc kubenswrapper[5118]: E0121 00:09:04.814436 5118 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.4:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.814785 5118 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.821738 5118 log.go:25] "Validated CRI v1 runtime API" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.838148 5118 log.go:25] "Validated CRI v1 image API" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.839938 5118 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.842007 5118 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-21-00-02-58-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.842037 5118 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.860049 5118 manager.go:217] Machine: {Timestamp:2026-01-21 00:09:04.858347366 +0000 UTC m=+0.182594414 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649934336 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} 
{PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:78a64d73-f919-4466-a9b9-ec34ac96c5c7 BootID:134a100e-afd8-41bd-8bdc-3d8d9cbfad99 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107658 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824967168 Type:vfs Inodes:4107658 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729990144 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a3:90:af Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a3:90:af Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:fb:63:2f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:75:ce:a7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:fe:b5:a2 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:cf:17:5b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:9a:14:fb:24:d7:c6 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fe:95:58:d3:4e:4d Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649934336 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 
Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 
Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.860424 5118 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.860610 5118 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.861815 5118 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.861898 5118 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None
","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.862086 5118 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.862097 5118 container_manager_linux.go:306] "Creating device plugin manager" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.862121 5118 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.862309 5118 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.862688 5118 state_mem.go:36] "Initialized new in-memory state store" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.862845 5118 server.go:1267] "Using root directory" path="/var/lib/kubelet" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.863379 5118 kubelet.go:491] "Attempting to sync node with API server" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.863397 5118 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.863411 5118 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.863422 5118 kubelet.go:397] "Adding apiserver pod source" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.863433 5118 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.865842 5118 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.865871 5118 state_mem.go:40] "Initialized 
new in-memory state store for pod resource information tracking"
Jan 21 00:09:04 crc kubenswrapper[5118]: E0121 00:09:04.866035 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 21 00:09:04 crc kubenswrapper[5118]: E0121 00:09:04.866475 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.868038 5118 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.868053 5118 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.869766 5118 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.870131 5118 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.870757 5118 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.871594 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.871652 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.871671 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.871690 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.871699 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.871707 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.871716 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.871725 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.871742 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.871755 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.871779 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.871930 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.872144 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.872170 5118 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.873362 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.883330 5118 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.883410 5118 server.go:1295] "Started kubelet"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.883880 5118 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.883978 5118 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.884079 5118 server_v1.go:47] "podresources" method="list" useActivePods=true
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.884965 5118 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 21 00:09:04 crc systemd[1]: Started Kubernetes Kubelet.
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.885651 5118 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 21 00:09:04 crc kubenswrapper[5118]: E0121 00:09:04.885296 5118 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.4:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188c966c25ba76ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.883373823 +0000 UTC m=+0.207620851,LastTimestamp:2026-01-21 00:09:04.883373823 +0000 UTC m=+0.207620851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.885733 5118 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Jan 21 00:09:04 crc kubenswrapper[5118]: E0121 00:09:04.886103 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.886410 5118 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.886467 5118 volume_manager.go:295] "The desired_state_of_world populator starts"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.886480 5118 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 21 00:09:04 crc kubenswrapper[5118]: E0121 00:09:04.886632 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.887396 5118 factory.go:55] Registering systemd factory
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.887443 5118 factory.go:223] Registration of the systemd container factory successfully
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.888108 5118 factory.go:153] Registering CRI-O factory
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.888142 5118 factory.go:223] Registration of the crio container factory successfully
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.888233 5118 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.888259 5118 factory.go:103] Registering Raw factory
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.888268 5118 server.go:317] "Adding debug handlers to kubelet server"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.888277 5118 manager.go:1196] Started watching for new ooms in manager
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.889669 5118 manager.go:319] Starting recovery of all containers
Jan 21 00:09:04 crc kubenswrapper[5118]: E0121 00:09:04.888991 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="200ms"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.916780 5118 manager.go:324] Recovery completed
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928469 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928558 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928572 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928582 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928591 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928602 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928616 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928627 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928643 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928654 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928667 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928677 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928687 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928697 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928715 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928733 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928747 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928758 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928771 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928783 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928796 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928808 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928819 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928828 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928840 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928849 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928858 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928868 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928904 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928930 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928945 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928966 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928979 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.928989 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929000 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929009 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929020 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929030 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929040 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929049 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929059 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929076 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929096 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929112 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929123 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929135 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929147 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929179 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929193 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929209 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929218 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929228 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929240 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929251 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929261 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929271 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929287 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929298 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929310 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929320 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929330 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929340 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929352 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929361 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929373 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929383 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929392 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929402 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929413 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929425 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929435 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929445 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929487 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929498 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929508 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929519 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929534 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929557 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929571 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929584 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929597 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929609 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929620 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929632 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929645 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929659 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929673 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929687 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929702 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929714 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929734 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929746 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929759 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929771 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Jan
21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929784 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929798 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929811 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929826 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929839 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929853 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929866 5118 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929879 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929892 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929905 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929917 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929931 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929944 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929958 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929971 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929984 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.929997 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930009 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930037 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" 
volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930050 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930063 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930077 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930090 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930102 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930115 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 
21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930130 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930146 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930270 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930290 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930305 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930317 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930331 5118 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930344 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930356 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930374 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930388 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930400 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930415 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930427 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.930439 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932356 5118 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932391 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932409 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932425 5118 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932459 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932473 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932488 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932545 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932560 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932572 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" 
volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932588 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932601 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932614 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932631 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932645 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932660 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932675 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932687 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932700 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932714 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932726 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932740 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" 
volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932754 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932768 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932785 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932802 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932816 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932830 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" 
seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932845 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932862 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932878 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932892 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932906 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932919 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932933 
5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932947 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932962 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932975 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.932988 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933000 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933015 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933030 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933044 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933060 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933076 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933089 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933103 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" 
volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933116 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933129 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933142 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933178 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933195 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933252 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933268 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933283 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933296 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933310 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933325 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933358 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933375 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933389 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933403 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933421 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933436 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933450 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933465 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933555 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933572 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933587 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933599 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933615 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933630 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933644 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933657 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933672 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933684 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933697 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933711 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933725 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933739 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933751 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933764 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933779 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933796 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933810 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933823 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933836 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933849 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933864 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933878 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933895 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.933997 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934015 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934028 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934042 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934058 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934073 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934086 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934103 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934120 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934144 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934177 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934191 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934192 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934204 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934303 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934319 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934335 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934348 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934396 5118 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext=""
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934410 5118 reconstruct.go:97] "Volume reconstruction finished"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.934418 5118 reconciler.go:26] "Reconciler: start to sync state"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.937549 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.937590 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.937604 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.939515 5118 cpu_manager.go:222] "Starting CPU manager" policy="none"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.939532 5118 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.939558 5118 state_mem.go:36] "Initialized new in-memory state store"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.947548 5118 policy_none.go:49] "None policy: Start"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.947592 5118 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.947622 5118 state_mem.go:35] "Initializing new in-memory state store"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.971108 5118 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.974340 5118 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.974407 5118 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.974452 5118 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 21 00:09:04 crc kubenswrapper[5118]: I0121 00:09:04.974471 5118 kubelet.go:2451] "Starting kubelet main sync loop"
Jan 21 00:09:04 crc kubenswrapper[5118]: E0121 00:09:04.974749 5118 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 21 00:09:04 crc kubenswrapper[5118]: E0121 00:09:04.977960 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 21 00:09:04 crc kubenswrapper[5118]: E0121 00:09:04.986786 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.011318 5118 manager.go:341] "Starting Device Plugin manager"
Jan 21 00:09:05 crc kubenswrapper[5118]: E0121 00:09:05.011897 5118 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.011918 5118 server.go:85] "Starting device plugin registration server"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.012377 5118 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.012399 5118 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.012562 5118 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.012656 5118 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.012665 5118 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 21 00:09:05 crc kubenswrapper[5118]: E0121 00:09:05.017101 5118 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Jan 21 00:09:05 crc kubenswrapper[5118]: E0121 00:09:05.017181 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.075658 5118 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.075898 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.077280 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.077339 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.077352 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.078574 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.078969 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.079014 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.082196 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.082229 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.082311 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.082329 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.082245 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.082384 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.083104 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.083330 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.083393 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.083965 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.083997 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.084009 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.084013 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.084070 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.084097 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.085290 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.085425 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.085471 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.086176 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.086220 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.086238 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.086278 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.086325 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.086336 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.086983 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.087170 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.087226 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.087567 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.087597 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.087613 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.087882 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.087916 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.087930 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.088492 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.088540 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.089101 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.089130 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.089139 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:09:05 crc kubenswrapper[5118]: E0121 00:09:05.091039 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="400ms"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.112723 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.113750 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.113801 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.113816 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.113842 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: E0121 00:09:05.114391 5118 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.4:6443: connect: connection refused" node="crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: E0121 00:09:05.126451 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: E0121 00:09:05.134086 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.138983 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139233 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139277 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139308 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139332 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139357 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139388 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139415 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139581 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139681 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139718 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139748 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139778 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139799 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139836 5118
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139866 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.139890 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.140045 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.140084 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.140172 5118 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.140214 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.140349 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.140487 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: E0121 00:09:05.155064 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:05 crc kubenswrapper[5118]: E0121 00:09:05.177424 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:05 crc kubenswrapper[5118]: E0121 00:09:05.185138 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:05 
crc kubenswrapper[5118]: I0121 00:09:05.241384 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241427 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241441 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241457 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241487 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241500 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241519 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241546 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241587 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241589 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241610 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:09:05 
crc kubenswrapper[5118]: I0121 00:09:05.241628 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241721 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241724 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241755 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241783 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241810 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241838 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241859 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241890 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241919 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241942 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241916 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241981 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.241999 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.242061 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.242098 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.242131 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.242182 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.242190 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.242190 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.242223 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.242242 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.315364 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.316086 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.316114 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.316123 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.316148 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 00:09:05 crc kubenswrapper[5118]: E0121 00:09:05.316650 5118 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.4:6443: connect: connection refused" node="crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.343830 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.343878 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.343897 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.343984 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.344068 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.344207 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.427420 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.435031 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: W0121 00:09:05.455570 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-73b5bcfad7a5bcf721922839bc91f1a24e92b5028d9cf777abccd95e0a255f49 WatchSource:0}: Error finding container 73b5bcfad7a5bcf721922839bc91f1a24e92b5028d9cf777abccd95e0a255f49: Status 404 returned error can't find the container with id 73b5bcfad7a5bcf721922839bc91f1a24e92b5028d9cf777abccd95e0a255f49 Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.455715 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.462528 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 00:09:05 crc kubenswrapper[5118]: W0121 00:09:05.463150 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-afdfae15f69900f3baf8fe712ad1c94b4c0cdb0b6f524ce725a80f366b51ed51 WatchSource:0}: Error finding container afdfae15f69900f3baf8fe712ad1c94b4c0cdb0b6f524ce725a80f366b51ed51: Status 404 returned error can't find the container with id afdfae15f69900f3baf8fe712ad1c94b4c0cdb0b6f524ce725a80f366b51ed51 Jan 21 00:09:05 crc kubenswrapper[5118]: W0121 00:09:05.474223 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-bf886450ca3222de8335b6d58fd25fbdb8b42552007ad31cc6da0d97211842ef WatchSource:0}: Error finding container bf886450ca3222de8335b6d58fd25fbdb8b42552007ad31cc6da0d97211842ef: Status 404 returned error can't find the container with id 
bf886450ca3222de8335b6d58fd25fbdb8b42552007ad31cc6da0d97211842ef Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.478404 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.486488 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:05 crc kubenswrapper[5118]: E0121 00:09:05.492804 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="800ms" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.717877 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.719694 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.719756 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.719772 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.719806 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 00:09:05 crc kubenswrapper[5118]: E0121 00:09:05.720445 5118 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.4:6443: connect: connection refused" node="crc" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.874525 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 00:09:05 crc kubenswrapper[5118]: E0121 00:09:05.908651 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 00:09:05 crc kubenswrapper[5118]: E0121 00:09:05.940761 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.984335 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"baa70dfa761c59bc53fd8ebdea8a4a5e2913b8ccba0a4fad20e1b834c670872f"} Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.985534 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"bf886450ca3222de8335b6d58fd25fbdb8b42552007ad31cc6da0d97211842ef"} Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.986426 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"afdfae15f69900f3baf8fe712ad1c94b4c0cdb0b6f524ce725a80f366b51ed51"} Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.987524 5118 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"73b5bcfad7a5bcf721922839bc91f1a24e92b5028d9cf777abccd95e0a255f49"} Jan 21 00:09:05 crc kubenswrapper[5118]: I0121 00:09:05.988475 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1c984ca57cfc62e90124a4621d218b49d5d013116cdc1271caa7301bd9caa4c4"} Jan 21 00:09:06 crc kubenswrapper[5118]: E0121 00:09:06.052194 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 00:09:06 crc kubenswrapper[5118]: E0121 00:09:06.293801 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="1.6s" Jan 21 00:09:06 crc kubenswrapper[5118]: E0121 00:09:06.320382 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.520612 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.522140 5118 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.522223 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.522240 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.522284 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 00:09:06 crc kubenswrapper[5118]: E0121 00:09:06.523035 5118 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.4:6443: connect: connection refused" node="crc" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.874588 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.4:6443: connect: connection refused Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.990119 5118 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 21 00:09:06 crc kubenswrapper[5118]: E0121 00:09:06.991654 5118 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.993732 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"93e48b61d0a2e616f65259ffbca42d9d000600a9f57c456e9fafc249cbbfa187"} Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.993794 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"531e890ac624829dfeab5674374a20bf8f80e96fe3ad6baff6532501d078f297"} Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.993807 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"7da13cdac196a74d6f3d3fe06fd8b8f1b93152d831e98ee1b66f4bd30f77756b"} Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.993820 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"f71ef5ad6b3cecbe91cf0fa1e4e8759ddda878222a1c71e9801313336e424458"} Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.993842 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.994447 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.994488 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.994500 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:06 crc kubenswrapper[5118]: E0121 00:09:06.994693 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"crc\" not found" node="crc" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.994983 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031" exitCode=0 Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.995081 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.995066 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031"} Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.995652 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.995683 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.995697 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:06 crc kubenswrapper[5118]: E0121 00:09:06.995933 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.997417 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.997817 5118 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4" exitCode=0 Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.997894 5118 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4"} Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.998029 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.998042 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.998073 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.998086 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.998768 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.998793 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.998805 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:06 crc kubenswrapper[5118]: E0121 00:09:06.998981 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.999314 5118 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c" exitCode=0 Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.999409 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c"} Jan 21 00:09:06 crc kubenswrapper[5118]: I0121 00:09:06.999387 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:07 crc kubenswrapper[5118]: I0121 00:09:07.001814 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:07 crc kubenswrapper[5118]: I0121 00:09:07.001845 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:07 crc kubenswrapper[5118]: I0121 00:09:07.001864 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:07 crc kubenswrapper[5118]: E0121 00:09:07.002066 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:07 crc kubenswrapper[5118]: I0121 00:09:07.002681 5118 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b344beee6c6fb8f37064a0308" exitCode=0 Jan 21 00:09:07 crc kubenswrapper[5118]: I0121 00:09:07.002749 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b344beee6c6fb8f37064a0308"} Jan 21 00:09:07 crc kubenswrapper[5118]: I0121 00:09:07.002882 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:07 crc kubenswrapper[5118]: I0121 00:09:07.003456 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 00:09:07 crc kubenswrapper[5118]: I0121 00:09:07.003493 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:07 crc kubenswrapper[5118]: I0121 00:09:07.003517 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:07 crc kubenswrapper[5118]: E0121 00:09:07.003910 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:07 crc kubenswrapper[5118]: E0121 00:09:07.519400 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.006703 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369"} Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.006772 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1"} Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.006782 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495"} Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.008461 5118 
generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec" exitCode=0 Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.008513 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec"} Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.008653 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.009724 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.009771 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.009787 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:08 crc kubenswrapper[5118]: E0121 00:09:08.010089 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.011379 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"69e28ae0052129054be6c0419161beea094bafc8c1cbcdcf5bf3436e7877d421"} Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.011461 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.011941 5118 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.011998 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.012008 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:08 crc kubenswrapper[5118]: E0121 00:09:08.012205 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.014762 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"0f3214d25bbdd49a8a29ce6f30a600024d862102e53bee5c64ac3f0880d97481"} Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.014810 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"37fdbdbec8b545e1b3921af5413cad07f8ffa20745589533bc0fffa6ec9a42fe"} Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.014821 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.014851 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.014826 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"01a6d01cbabb92bffcca05eb808b4bd0bee991f66f129422707d982e4e3d320f"} Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.015522 5118 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.015558 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.015570 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.015524 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.015634 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.015651 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:08 crc kubenswrapper[5118]: E0121 00:09:08.015799 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:08 crc kubenswrapper[5118]: E0121 00:09:08.015956 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.123221 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.124357 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.124413 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.124427 5118 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 00:09:08 crc kubenswrapper[5118]: I0121 00:09:08.124456 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.021115 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"46e3f4d2281defbe831ecaac3f2191effc8d95433fd22da93fd2bf2660080b7d"} Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.021208 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370"} Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.021461 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.022394 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.022440 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.022452 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:09 crc kubenswrapper[5118]: E0121 00:09:09.022778 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.023213 5118 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea" exitCode=0 Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 
00:09:09.023367 5118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.023409 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.024085 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea"} Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.024093 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.024356 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.027490 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.027542 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.027549 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.027503 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.027584 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.027560 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.027599 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.027589 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.027656 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:09 crc kubenswrapper[5118]: E0121 00:09:09.027930 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:09 crc kubenswrapper[5118]: E0121 00:09:09.028238 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:09 crc kubenswrapper[5118]: E0121 00:09:09.028334 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.764263 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.764498 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.765461 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.765510 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:09 crc kubenswrapper[5118]: I0121 00:09:09.765521 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:09 crc kubenswrapper[5118]: 
E0121 00:09:09.765892 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:10 crc kubenswrapper[5118]: I0121 00:09:10.030941 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9c43bace9e1ec4b78fc3886b886cfc9eb9505e5cd415b54a393092a5fb6bfede"} Jan 21 00:09:10 crc kubenswrapper[5118]: I0121 00:09:10.031000 5118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 00:09:10 crc kubenswrapper[5118]: I0121 00:09:10.031047 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:10 crc kubenswrapper[5118]: I0121 00:09:10.031005 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9d9f0111a3537cc924a7e201bcd1e6a41bc82e79b86ec8f1d33560c518239fe9"} Jan 21 00:09:10 crc kubenswrapper[5118]: I0121 00:09:10.031101 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"cde9ebfec14b67069eee7df51b0b8e257d4b7ccb5fc744f7cf08722b62167f08"} Jan 21 00:09:10 crc kubenswrapper[5118]: I0121 00:09:10.031573 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:10 crc kubenswrapper[5118]: I0121 00:09:10.031609 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:10 crc kubenswrapper[5118]: I0121 00:09:10.031622 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:10 crc kubenswrapper[5118]: E0121 00:09:10.031959 5118 kubelet.go:3336] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:10 crc kubenswrapper[5118]: I0121 00:09:10.941423 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:09:10 crc kubenswrapper[5118]: I0121 00:09:10.942337 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:10 crc kubenswrapper[5118]: I0121 00:09:10.944049 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:10 crc kubenswrapper[5118]: I0121 00:09:10.944110 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:10 crc kubenswrapper[5118]: I0121 00:09:10.944123 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:10 crc kubenswrapper[5118]: E0121 00:09:10.944559 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:11 crc kubenswrapper[5118]: I0121 00:09:11.038143 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"898860c9529a12085df4c5531acb1bd4f2bf2dc8acc40c795bef9e642ab80c73"} Jan 21 00:09:11 crc kubenswrapper[5118]: I0121 00:09:11.038208 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"efa0534fa57e4334809de905bc9c6076a74ca99b2829d2716055befea0eb99ee"} Jan 21 00:09:11 crc kubenswrapper[5118]: I0121 00:09:11.038352 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:11 crc kubenswrapper[5118]: I0121 00:09:11.039058 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:11 crc kubenswrapper[5118]: I0121 00:09:11.039100 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:11 crc kubenswrapper[5118]: I0121 00:09:11.039114 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:11 crc kubenswrapper[5118]: E0121 00:09:11.039370 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:11 crc kubenswrapper[5118]: I0121 00:09:11.049639 5118 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 21 00:09:11 crc kubenswrapper[5118]: I0121 00:09:11.068097 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:11 crc kubenswrapper[5118]: I0121 00:09:11.068404 5118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 00:09:11 crc kubenswrapper[5118]: I0121 00:09:11.068453 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:11 crc kubenswrapper[5118]: I0121 00:09:11.069624 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:11 crc kubenswrapper[5118]: I0121 00:09:11.069687 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:11 crc kubenswrapper[5118]: I0121 00:09:11.069702 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:11 crc kubenswrapper[5118]: E0121 00:09:11.070101 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:11 crc kubenswrapper[5118]: I0121 00:09:11.098762 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:11 crc kubenswrapper[5118]: I0121 00:09:11.482822 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Jan 21 00:09:12 crc kubenswrapper[5118]: I0121 00:09:12.040997 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:12 crc kubenswrapper[5118]: I0121 00:09:12.041050 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:12 crc kubenswrapper[5118]: I0121 00:09:12.043257 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:12 crc kubenswrapper[5118]: I0121 00:09:12.043297 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:12 crc kubenswrapper[5118]: I0121 00:09:12.043311 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:12 crc kubenswrapper[5118]: I0121 00:09:12.043269 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:12 crc kubenswrapper[5118]: I0121 00:09:12.043370 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:12 crc kubenswrapper[5118]: I0121 00:09:12.043395 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:12 crc kubenswrapper[5118]: E0121 00:09:12.043754 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:12 crc 
kubenswrapper[5118]: E0121 00:09:12.044113 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:12 crc kubenswrapper[5118]: I0121 00:09:12.612359 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:12 crc kubenswrapper[5118]: I0121 00:09:12.612787 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:12 crc kubenswrapper[5118]: I0121 00:09:12.614314 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:12 crc kubenswrapper[5118]: I0121 00:09:12.614407 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:12 crc kubenswrapper[5118]: I0121 00:09:12.614432 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:12 crc kubenswrapper[5118]: E0121 00:09:12.615039 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:13 crc kubenswrapper[5118]: I0121 00:09:13.037273 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:13 crc kubenswrapper[5118]: I0121 00:09:13.043524 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:13 crc kubenswrapper[5118]: I0121 00:09:13.043626 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:13 crc kubenswrapper[5118]: I0121 00:09:13.044735 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 
00:09:13 crc kubenswrapper[5118]: I0121 00:09:13.044804 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:13 crc kubenswrapper[5118]: I0121 00:09:13.044816 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:13 crc kubenswrapper[5118]: I0121 00:09:13.044810 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:13 crc kubenswrapper[5118]: I0121 00:09:13.044863 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:13 crc kubenswrapper[5118]: I0121 00:09:13.044876 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:13 crc kubenswrapper[5118]: E0121 00:09:13.045254 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:13 crc kubenswrapper[5118]: E0121 00:09:13.046019 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:13 crc kubenswrapper[5118]: I0121 00:09:13.400885 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:13 crc kubenswrapper[5118]: I0121 00:09:13.401136 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:13 crc kubenswrapper[5118]: I0121 00:09:13.402358 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:13 crc kubenswrapper[5118]: I0121 00:09:13.402436 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:13 crc 
kubenswrapper[5118]: I0121 00:09:13.402457 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:13 crc kubenswrapper[5118]: E0121 00:09:13.403151 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:14 crc kubenswrapper[5118]: I0121 00:09:14.963455 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:14 crc kubenswrapper[5118]: I0121 00:09:14.963667 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:14 crc kubenswrapper[5118]: I0121 00:09:14.964767 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:14 crc kubenswrapper[5118]: I0121 00:09:14.964799 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:14 crc kubenswrapper[5118]: I0121 00:09:14.964811 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:14 crc kubenswrapper[5118]: E0121 00:09:14.965091 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:14 crc kubenswrapper[5118]: I0121 00:09:14.970046 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:15 crc kubenswrapper[5118]: E0121 00:09:15.017412 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 00:09:15 crc kubenswrapper[5118]: I0121 00:09:15.048722 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Jan 21 00:09:15 crc kubenswrapper[5118]: I0121 00:09:15.051242 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:15 crc kubenswrapper[5118]: I0121 00:09:15.051328 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:15 crc kubenswrapper[5118]: I0121 00:09:15.051367 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:15 crc kubenswrapper[5118]: E0121 00:09:15.052122 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:15 crc kubenswrapper[5118]: I0121 00:09:15.612347 5118 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Jan 21 00:09:15 crc kubenswrapper[5118]: I0121 00:09:15.612466 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Jan 21 00:09:15 crc kubenswrapper[5118]: I0121 00:09:15.834364 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 21 00:09:15 crc kubenswrapper[5118]: I0121 00:09:15.834687 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:15 crc kubenswrapper[5118]: I0121 00:09:15.835708 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:15 crc kubenswrapper[5118]: I0121 
00:09:15.835771 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:15 crc kubenswrapper[5118]: I0121 00:09:15.835788 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:15 crc kubenswrapper[5118]: E0121 00:09:15.836286 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:17 crc kubenswrapper[5118]: I0121 00:09:17.874655 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 21 00:09:17 crc kubenswrapper[5118]: E0121 00:09:17.895498 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 21 00:09:18 crc kubenswrapper[5118]: E0121 00:09:18.125530 5118 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 21 00:09:18 crc kubenswrapper[5118]: I0121 00:09:18.487984 5118 trace.go:236] Trace[224980664]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 00:09:08.486) (total time: 10001ms): Jan 21 00:09:18 crc kubenswrapper[5118]: Trace[224980664]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:18.487) Jan 21 00:09:18 crc kubenswrapper[5118]: Trace[224980664]: [10.001130428s] [10.001130428s] END Jan 21 00:09:18 crc kubenswrapper[5118]: E0121 00:09:18.488026 5118 reflector.go:200] "Failed to watch" err="failed 
to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 00:09:18 crc kubenswrapper[5118]: I0121 00:09:18.629033 5118 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 00:09:18 crc kubenswrapper[5118]: I0121 00:09:18.629104 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 00:09:18 crc kubenswrapper[5118]: I0121 00:09:18.635142 5118 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 00:09:18 crc kubenswrapper[5118]: I0121 00:09:18.635237 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 00:09:21 crc kubenswrapper[5118]: E0121 00:09:21.103026 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is 
forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Jan 21 00:09:21 crc kubenswrapper[5118]: I0121 00:09:21.326521 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:21 crc kubenswrapper[5118]: I0121 00:09:21.327794 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:21 crc kubenswrapper[5118]: I0121 00:09:21.327857 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:21 crc kubenswrapper[5118]: I0121 00:09:21.327872 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:21 crc kubenswrapper[5118]: I0121 00:09:21.327905 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 00:09:21 crc kubenswrapper[5118]: E0121 00:09:21.338508 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.034260 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 00:09:23 crc kubenswrapper[5118]: I0121 00:09:23.401691 5118 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 
192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 00:09:23 crc kubenswrapper[5118]: I0121 00:09:23.401799 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 00:09:23 crc kubenswrapper[5118]: I0121 00:09:23.410952 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:23 crc kubenswrapper[5118]: I0121 00:09:23.411226 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:23 crc kubenswrapper[5118]: I0121 00:09:23.411890 5118 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 00:09:23 crc kubenswrapper[5118]: I0121 00:09:23.411931 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 00:09:23 crc kubenswrapper[5118]: I0121 00:09:23.412197 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:23 crc kubenswrapper[5118]: I0121 00:09:23.412275 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:23 crc kubenswrapper[5118]: I0121 00:09:23.412291 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.412653 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:23 crc kubenswrapper[5118]: I0121 00:09:23.417895 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:23 crc kubenswrapper[5118]: I0121 00:09:23.636228 5118 trace.go:236] Trace[985025781]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 00:09:09.094) (total time: 14541ms): Jan 21 00:09:23 crc kubenswrapper[5118]: Trace[985025781]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 14541ms (00:09:23.636) Jan 21 00:09:23 crc kubenswrapper[5118]: Trace[985025781]: [14.541375604s] [14.541375604s] END Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.636535 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 00:09:23 crc kubenswrapper[5118]: I0121 00:09:23.636352 5118 trace.go:236] Trace[1371723128]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 00:09:08.925) (total time: 14710ms): Jan 21 00:09:23 crc kubenswrapper[5118]: Trace[1371723128]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 14710ms (00:09:23.636) Jan 21 00:09:23 crc kubenswrapper[5118]: Trace[1371723128]: [14.710886106s] [14.710886106s] END Jan 21 00:09:23 crc 
kubenswrapper[5118]: E0121 00:09:23.636654 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c25ba76ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.883373823 +0000 UTC m=+0.207620851,LastTimestamp:2026-01-21 00:09:04.883373823 +0000 UTC m=+0.207620851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.636825 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 00:09:23 crc kubenswrapper[5118]: I0121 00:09:23.636497 5118 trace.go:236] Trace[1281250154]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 00:09:11.937) (total time: 11699ms): Jan 21 00:09:23 crc kubenswrapper[5118]: Trace[1281250154]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 11699ms (00:09:23.636) Jan 21 00:09:23 crc kubenswrapper[5118]: Trace[1281250154]: [11.699089718s] [11.699089718s] END Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.636950 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API 
group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 00:09:23 crc kubenswrapper[5118]: I0121 00:09:23.637539 5118 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.641309 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f58a50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.93757704 +0000 UTC m=+0.261824058,LastTimestamp:2026-01-21 00:09:04.93757704 +0000 UTC m=+0.261824058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.649601 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f5d512 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.937596178 +0000 UTC m=+0.261843196,LastTimestamp:2026-01-21 
00:09:04.937596178 +0000 UTC m=+0.261843196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.654014 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f608bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.937609407 +0000 UTC m=+0.261856425,LastTimestamp:2026-01-21 00:09:04.937609407 +0000 UTC m=+0.261856425,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.661560 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c2d99e753 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:05.015457619 +0000 UTC m=+0.339704647,LastTimestamp:2026-01-21 00:09:05.015457619 +0000 UTC m=+0.339704647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc 
kubenswrapper[5118]: E0121 00:09:23.666864 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f58a50\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f58a50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.93757704 +0000 UTC m=+0.261824058,LastTimestamp:2026-01-21 00:09:05.077308873 +0000 UTC m=+0.401555891,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.671568 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f5d512\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f5d512 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.937596178 +0000 UTC m=+0.261843196,LastTimestamp:2026-01-21 00:09:05.077347238 +0000 UTC m=+0.401594256,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.675179 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f608bf\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f608bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.937609407 +0000 UTC m=+0.261856425,LastTimestamp:2026-01-21 00:09:05.077357367 +0000 UTC m=+0.401604385,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.679467 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f58a50\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f58a50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.93757704 +0000 UTC m=+0.261824058,LastTimestamp:2026-01-21 00:09:05.082221994 +0000 UTC m=+0.406469012,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.683377 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f58a50\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f58a50 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.93757704 +0000 UTC m=+0.261824058,LastTimestamp:2026-01-21 00:09:05.082295086 +0000 UTC m=+0.406542104,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.688480 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f5d512\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f5d512 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.937596178 +0000 UTC m=+0.261843196,LastTimestamp:2026-01-21 00:09:05.082322083 +0000 UTC m=+0.406569111,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.693332 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f608bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f608bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.937609407 +0000 UTC m=+0.261856425,LastTimestamp:2026-01-21 00:09:05.082335372 +0000 UTC m=+0.406582400,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.698055 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f5d512\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f5d512 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.937596178 +0000 UTC m=+0.261843196,LastTimestamp:2026-01-21 00:09:05.082374277 +0000 UTC m=+0.406621295,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.702403 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f608bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f608bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is 
now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.937609407 +0000 UTC m=+0.261856425,LastTimestamp:2026-01-21 00:09:05.082390266 +0000 UTC m=+0.406637284,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.707242 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f58a50\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f58a50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.93757704 +0000 UTC m=+0.261824058,LastTimestamp:2026-01-21 00:09:05.083983964 +0000 UTC m=+0.408230982,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.711928 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f5d512\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f5d512 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.937596178 +0000 UTC 
m=+0.261843196,LastTimestamp:2026-01-21 00:09:05.084003832 +0000 UTC m=+0.408250840,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.715708 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f608bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f608bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.937609407 +0000 UTC m=+0.261856425,LastTimestamp:2026-01-21 00:09:05.084013731 +0000 UTC m=+0.408260749,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.721297 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f58a50\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f58a50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.93757704 +0000 UTC m=+0.261824058,LastTimestamp:2026-01-21 00:09:05.084049087 +0000 UTC m=+0.408296105,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.728600 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f5d512\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f5d512 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.937596178 +0000 UTC m=+0.261843196,LastTimestamp:2026-01-21 00:09:05.084080074 +0000 UTC m=+0.408327092,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.732195 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f608bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f608bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.937609407 +0000 UTC m=+0.261856425,LastTimestamp:2026-01-21 00:09:05.084103821 +0000 UTC m=+0.408350839,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.736654 5118 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.188c966c28f58a50\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f58a50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.93757704 +0000 UTC m=+0.261824058,LastTimestamp:2026-01-21 00:09:05.086200086 +0000 UTC m=+0.410447104,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.740803 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f5d512\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f5d512 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.937596178 +0000 UTC m=+0.261843196,LastTimestamp:2026-01-21 00:09:05.086228463 +0000 UTC m=+0.410475491,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.741897 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f608bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group 
\"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f608bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.937609407 +0000 UTC m=+0.261856425,LastTimestamp:2026-01-21 00:09:05.08624894 +0000 UTC m=+0.410495958,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.745095 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f58a50\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f58a50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.93757704 +0000 UTC m=+0.261824058,LastTimestamp:2026-01-21 00:09:05.086311564 +0000 UTC m=+0.410558582,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.748979 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c966c28f5d512\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c966c28f5d512 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:04.937596178 +0000 UTC m=+0.261843196,LastTimestamp:2026-01-21 00:09:05.086331542 +0000 UTC m=+0.410578560,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.753060 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c966c484533b7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:05.462891447 +0000 UTC m=+0.787138465,LastTimestamp:2026-01-21 00:09:05.462891447 +0000 UTC m=+0.787138465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.757269 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c966c487e0948 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:05.466616136 +0000 UTC m=+0.790863174,LastTimestamp:2026-01-21 00:09:05.466616136 +0000 UTC m=+0.790863174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.761341 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c966c49192d48 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:05.476783432 +0000 UTC m=+0.801030450,LastTimestamp:2026-01-21 00:09:05.476783432 +0000 UTC 
m=+0.801030450,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.765649 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966c4ae96af7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:05.507207927 +0000 UTC m=+0.831454945,LastTimestamp:2026-01-21 00:09:05.507207927 +0000 UTC m=+0.831454945,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.770128 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966c4aed1238 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:05.507447352 +0000 UTC m=+0.831694370,LastTimestamp:2026-01-21 00:09:05.507447352 +0000 UTC m=+0.831694370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.775146 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c966c6ad0a294 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.042454676 +0000 UTC m=+1.366701694,LastTimestamp:2026-01-21 00:09:06.042454676 +0000 UTC m=+1.366701694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.780285 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c966c6ad1d35a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.042532698 +0000 UTC m=+1.366779716,LastTimestamp:2026-01-21 00:09:06.042532698 +0000 UTC m=+1.366779716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.784768 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c966c6b6927c0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.05245024 +0000 UTC m=+1.376697258,LastTimestamp:2026-01-21 00:09:06.05245024 +0000 UTC m=+1.376697258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.788715 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966c6b85a54e 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.05431739 +0000 UTC m=+1.378564407,LastTimestamp:2026-01-21 00:09:06.05431739 +0000 UTC m=+1.378564407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.793403 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966c6b880d41 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.054475073 +0000 UTC m=+1.378722091,LastTimestamp:2026-01-21 00:09:06.054475073 +0000 UTC m=+1.378722091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.797391 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c966c6b9f5a96 openshift-kube-scheduler 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.056002198 +0000 UTC m=+1.380249226,LastTimestamp:2026-01-21 00:09:06.056002198 +0000 UTC m=+1.380249226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.801574 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c966c6bab2d91 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.056777105 +0000 UTC m=+1.381024123,LastTimestamp:2026-01-21 00:09:06.056777105 +0000 UTC m=+1.381024123,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.810842 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c966c6bbd2d28 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.057956648 +0000 UTC m=+1.382203666,LastTimestamp:2026-01-21 00:09:06.057956648 +0000 UTC m=+1.382203666,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.815422 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966c6c901ab4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.07178002 +0000 UTC m=+1.396027038,LastTimestamp:2026-01-21 00:09:06.07178002 +0000 UTC m=+1.396027038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.819402 5118 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966c6ca11f9e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.07289539 +0000 UTC m=+1.397142408,LastTimestamp:2026-01-21 00:09:06.07289539 +0000 UTC m=+1.397142408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.823841 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c966c6d5a3e93 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.085027475 +0000 UTC m=+1.409274503,LastTimestamp:2026-01-21 00:09:06.085027475 +0000 UTC m=+1.409274503,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 
00:09:23.827497 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c966c7c605d77 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.337086839 +0000 UTC m=+1.661333857,LastTimestamp:2026-01-21 00:09:06.337086839 +0000 UTC m=+1.661333857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.831027 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c966c7cd9c9bc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.345044412 +0000 UTC m=+1.669291430,LastTimestamp:2026-01-21 00:09:06.345044412 +0000 UTC 
m=+1.669291430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.834703 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c966c7cefe1d9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.346492377 +0000 UTC m=+1.670739395,LastTimestamp:2026-01-21 00:09:06.346492377 +0000 UTC m=+1.670739395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.839505 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c966c9365a3b1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.723308465 +0000 UTC m=+2.047555483,LastTimestamp:2026-01-21 00:09:06.723308465 +0000 UTC m=+2.047555483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.847186 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c966c944cfe21 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.738470433 +0000 UTC m=+2.062717451,LastTimestamp:2026-01-21 00:09:06.738470433 +0000 UTC m=+2.062717451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.851011 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c966c9468c368 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.740290408 +0000 UTC m=+2.064537426,LastTimestamp:2026-01-21 00:09:06.740290408 +0000 UTC m=+2.064537426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.855010 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c966ca0bf456a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.947286378 +0000 UTC m=+2.271533396,LastTimestamp:2026-01-21 00:09:06.947286378 +0000 UTC 
m=+2.271533396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.859951 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c966ca16fefa6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.958864294 +0000 UTC m=+2.283111322,LastTimestamp:2026-01-21 00:09:06.958864294 +0000 UTC m=+2.283111322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.864646 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966ca3b9f291 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:06.997269137 +0000 UTC m=+2.321516145,LastTimestamp:2026-01-21 00:09:06.997269137 +0000 UTC m=+2.321516145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.868923 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966ca3e3f3c7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.000021959 +0000 UTC m=+2.324268987,LastTimestamp:2026-01-21 00:09:07.000021959 +0000 UTC m=+2.324268987,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.872823 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c966ca414b6c2 openshift-machine-config-operator 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.003217602 +0000 UTC m=+2.327464620,LastTimestamp:2026-01-21 00:09:07.003217602 +0000 UTC m=+2.327464620,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.876341 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c966ca44562d9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.006407385 +0000 UTC m=+2.330654403,LastTimestamp:2026-01-21 00:09:07.006407385 +0000 UTC m=+2.330654403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: I0121 
00:09:23.876751 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.880067 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966cb9a46be9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.364957161 +0000 UTC m=+2.689204209,LastTimestamp:2026-01-21 00:09:07.364957161 +0000 UTC m=+2.689204209,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.883913 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c966cba142b43 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.372280643 +0000 UTC m=+2.696527661,LastTimestamp:2026-01-21 00:09:07.372280643 +0000 UTC m=+2.696527661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.888100 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c966cba15ac33 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.372379187 +0000 UTC m=+2.696626205,LastTimestamp:2026-01-21 00:09:07.372379187 +0000 UTC m=+2.696626205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.893975 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966cba1ddb11 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.372915473 +0000 UTC m=+2.697162501,LastTimestamp:2026-01-21 00:09:07.372915473 +0000 UTC m=+2.697162501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.898786 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966cba8ccc32 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.380186162 +0000 UTC m=+2.704433180,LastTimestamp:2026-01-21 00:09:07.380186162 +0000 UTC m=+2.704433180,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.903780 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966cba9edd93 openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.381370259 +0000 UTC m=+2.705617277,LastTimestamp:2026-01-21 00:09:07.381370259 +0000 UTC m=+2.705617277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.918228 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c966cbabee683 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.383469699 +0000 UTC m=+2.707716717,LastTimestamp:2026-01-21 00:09:07.383469699 +0000 UTC m=+2.707716717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.922583 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c966cbad9741b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.385209883 +0000 UTC m=+2.709456911,LastTimestamp:2026-01-21 00:09:07.385209883 +0000 UTC m=+2.709456911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.927696 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c966cbb6370b1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.394252977 +0000 UTC m=+2.718499995,LastTimestamp:2026-01-21 00:09:07.394252977 +0000 UTC m=+2.718499995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.943423 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966cbb9a64c0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.3978544 +0000 UTC m=+2.722101418,LastTimestamp:2026-01-21 00:09:07.3978544 +0000 UTC m=+2.722101418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.949554 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966cc7f78cce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.605286094 +0000 UTC m=+2.929533122,LastTimestamp:2026-01-21 00:09:07.605286094 +0000 UTC m=+2.929533122,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.955472 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c966cc816c524 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.607332132 +0000 UTC m=+2.931579150,LastTimestamp:2026-01-21 00:09:07.607332132 +0000 UTC m=+2.931579150,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.961186 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966cc8b43903 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.617650947 +0000 UTC 
m=+2.941897965,LastTimestamp:2026-01-21 00:09:07.617650947 +0000 UTC m=+2.941897965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.971172 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966cc8c70848 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.618883656 +0000 UTC m=+2.943130674,LastTimestamp:2026-01-21 00:09:07.618883656 +0000 UTC m=+2.943130674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.977606 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c966cc8f09bcd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.621608397 +0000 UTC m=+2.945855415,LastTimestamp:2026-01-21 00:09:07.621608397 +0000 UTC m=+2.945855415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.987005 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c966cc901d491 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.622737041 +0000 UTC m=+2.946984059,LastTimestamp:2026-01-21 00:09:07.622737041 +0000 UTC m=+2.946984059,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:23 crc kubenswrapper[5118]: E0121 00:09:23.993863 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966cd6540966 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.846228326 +0000 UTC m=+3.170475344,LastTimestamp:2026-01-21 00:09:07.846228326 +0000 UTC m=+3.170475344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.000753 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c966cd66afcf4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.847732468 +0000 UTC m=+3.171979486,LastTimestamp:2026-01-21 00:09:07.847732468 +0000 UTC m=+3.171979486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc 
kubenswrapper[5118]: E0121 00:09:24.006030 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966cd701a6ee openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.857606382 +0000 UTC m=+3.181853410,LastTimestamp:2026-01-21 00:09:07.857606382 +0000 UTC m=+3.181853410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.010329 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966cd716163c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.858945596 +0000 UTC 
m=+3.183192614,LastTimestamp:2026-01-21 00:09:07.858945596 +0000 UTC m=+3.183192614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.013856 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c966cd71e062b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:07.859465771 +0000 UTC m=+3.183712789,LastTimestamp:2026-01-21 00:09:07.859465771 +0000 UTC m=+3.183712789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.017834 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966ce042beb4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:08.012867252 +0000 UTC m=+3.337114270,LastTimestamp:2026-01-21 00:09:08.012867252 +0000 UTC m=+3.337114270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.022273 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966ce3b4a642 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:08.070663746 +0000 UTC m=+3.394910784,LastTimestamp:2026-01-21 00:09:08.070663746 +0000 UTC m=+3.394910784,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.038970 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966ce62e8d0f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:08.112207119 +0000 UTC m=+3.436454127,LastTimestamp:2026-01-21 00:09:08.112207119 +0000 UTC m=+3.436454127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.044821 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966ce649a7d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:08.113983445 +0000 UTC m=+3.438230463,LastTimestamp:2026-01-21 00:09:08.113983445 +0000 UTC m=+3.438230463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.049549 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966ced622053 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:08.233027667 +0000 UTC m=+3.557274685,LastTimestamp:2026-01-21 00:09:08.233027667 +0000 UTC m=+3.557274685,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.053724 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966ceefa886a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:08.259793002 +0000 UTC m=+3.584040030,LastTimestamp:2026-01-21 00:09:08.259793002 +0000 UTC m=+3.584040030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.057243 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966cf6a4141c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:08.38834486 +0000 UTC m=+3.712591878,LastTimestamp:2026-01-21 00:09:08.38834486 +0000 UTC m=+3.712591878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.061508 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966cf73b7600 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:08.398265856 +0000 UTC m=+3.722512874,LastTimestamp:2026-01-21 00:09:08.398265856 +0000 UTC m=+3.722512874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.066439 5118 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966d1cdf74eb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:09.029770475 +0000 UTC m=+4.354017493,LastTimestamp:2026-01-21 00:09:09.029770475 +0000 UTC m=+4.354017493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.070567 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966d28d4ab50 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:09.230390096 +0000 UTC m=+4.554637114,LastTimestamp:2026-01-21 00:09:09.230390096 +0000 UTC m=+4.554637114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: I0121 00:09:24.070939 5118 kubelet_node_status.go:413] 
"Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:24 crc kubenswrapper[5118]: I0121 00:09:24.071692 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:24 crc kubenswrapper[5118]: I0121 00:09:24.071738 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:24 crc kubenswrapper[5118]: I0121 00:09:24.071756 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.072112 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.074378 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966d298bcaa5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:09.242391205 +0000 UTC m=+4.566638223,LastTimestamp:2026-01-21 00:09:09.242391205 +0000 UTC m=+4.566638223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.077636 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966d29a48741 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:09.244012353 +0000 UTC m=+4.568259371,LastTimestamp:2026-01-21 00:09:09.244012353 +0000 UTC m=+4.568259371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.082448 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966d39fef62d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:09.518374445 +0000 UTC m=+4.842621463,LastTimestamp:2026-01-21 00:09:09.518374445 +0000 UTC m=+4.842621463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.086100 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in 
API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966d3ddca2c5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:09.583233733 +0000 UTC m=+4.907480761,LastTimestamp:2026-01-21 00:09:09.583233733 +0000 UTC m=+4.907480761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.089451 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966d3def5ffe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:09.584461822 +0000 UTC m=+4.908708840,LastTimestamp:2026-01-21 00:09:09.584461822 +0000 UTC m=+4.908708840,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.093984 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966d4c754e41 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:09.828120129 +0000 UTC m=+5.152367157,LastTimestamp:2026-01-21 00:09:09.828120129 +0000 UTC m=+5.152367157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.097359 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966d4d263d73 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:09.839715699 +0000 UTC m=+5.163962717,LastTimestamp:2026-01-21 00:09:09.839715699 +0000 UTC m=+5.163962717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.100635 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966d4d3632fb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:09.840761595 +0000 UTC m=+5.165008613,LastTimestamp:2026-01-21 00:09:09.840761595 +0000 UTC m=+5.165008613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.105229 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966d6f370f3d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:10.411243325 +0000 UTC m=+5.735490343,LastTimestamp:2026-01-21 00:09:10.411243325 +0000 UTC m=+5.735490343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.109212 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966d6ff83ef8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:10.423903992 +0000 UTC m=+5.748151000,LastTimestamp:2026-01-21 00:09:10.423903992 +0000 UTC m=+5.748151000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.112726 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966d7014c7c1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:10.425774017 +0000 UTC m=+5.750021045,LastTimestamp:2026-01-21 00:09:10.425774017 +0000 UTC m=+5.750021045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.115977 5118 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966d7b49affb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:10.613790715 +0000 UTC m=+5.938037723,LastTimestamp:2026-01-21 00:09:10.613790715 +0000 UTC m=+5.938037723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.121088 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c966d7c2d743b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:10.628717627 +0000 UTC m=+5.952964645,LastTimestamp:2026-01-21 00:09:10.628717627 +0000 UTC m=+5.952964645,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.126576 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 21 00:09:24 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-controller-manager-crc.188c966ea53a9cb0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Jan 21 00:09:24 crc kubenswrapper[5118]: body: Jan 21 00:09:24 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:15.612413104 +0000 UTC m=+10.936660122,LastTimestamp:2026-01-21 00:09:15.612413104 +0000 UTC m=+10.936660122,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 21 00:09:24 crc kubenswrapper[5118]: > Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.130713 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c966ea53c59b3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:15.612527027 +0000 UTC 
m=+10.936774045,LastTimestamp:2026-01-21 00:09:15.612527027 +0000 UTC m=+10.936774045,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.134848 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 21 00:09:24 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-apiserver-crc.188c966f590953f1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 21 00:09:24 crc kubenswrapper[5118]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 00:09:24 crc kubenswrapper[5118]: Jan 21 00:09:24 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:18.629082097 +0000 UTC m=+13.953329115,LastTimestamp:2026-01-21 00:09:18.629082097 +0000 UTC m=+13.953329115,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 21 00:09:24 crc kubenswrapper[5118]: > Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.140529 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188c966f5909fbc2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:18.629125058 +0000 UTC m=+13.953372076,LastTimestamp:2026-01-21 00:09:18.629125058 +0000 UTC m=+13.953372076,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.145029 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c966f590953f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 21 00:09:24 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-apiserver-crc.188c966f590953f1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 21 00:09:24 crc kubenswrapper[5118]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 00:09:24 crc kubenswrapper[5118]: Jan 21 00:09:24 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:18.629082097 +0000 UTC 
m=+13.953329115,LastTimestamp:2026-01-21 00:09:18.63520465 +0000 UTC m=+13.959451668,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 21 00:09:24 crc kubenswrapper[5118]: > Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.149861 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c966f5909fbc2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966f5909fbc2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:18.629125058 +0000 UTC m=+13.953372076,LastTimestamp:2026-01-21 00:09:18.635260261 +0000 UTC m=+13.959507289,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: I0121 00:09:24.150842 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:24 crc kubenswrapper[5118]: I0121 00:09:24.151021 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:24 crc kubenswrapper[5118]: I0121 00:09:24.152533 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:24 crc kubenswrapper[5118]: I0121 00:09:24.152560 5118 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:24 crc kubenswrapper[5118]: I0121 00:09:24.152571 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.152806 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:24 crc kubenswrapper[5118]: I0121 00:09:24.156171 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.156261 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 21 00:09:24 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-apiserver-crc.188c96707582c4c9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 21 00:09:24 crc kubenswrapper[5118]: body: Jan 21 00:09:24 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:23.401770185 +0000 UTC m=+18.726017243,LastTimestamp:2026-01-21 00:09:23.401770185 +0000 UTC m=+18.726017243,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 21 00:09:24 crc kubenswrapper[5118]: > Jan 21 00:09:24 crc 
kubenswrapper[5118]: I0121 00:09:24.156446 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.160378 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c96707583b574 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:23.401831796 +0000 UTC m=+18.726078854,LastTimestamp:2026-01-21 00:09:23.401831796 +0000 UTC m=+18.726078854,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.165196 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c96707582c4c9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 21 00:09:24 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-apiserver-crc.188c96707582c4c9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 21 00:09:24 crc kubenswrapper[5118]: body: Jan 21 00:09:24 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:23.401770185 +0000 UTC m=+18.726017243,LastTimestamp:2026-01-21 00:09:23.411919005 +0000 UTC m=+18.736166033,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 21 00:09:24 crc kubenswrapper[5118]: > Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.171124 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c96707583b574\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c96707583b574 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:23.401831796 +0000 UTC m=+18.726078854,LastTimestamp:2026-01-21 00:09:23.411951485 +0000 UTC m=+18.736198513,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: I0121 00:09:24.408266 5118 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56466->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 21 00:09:24 crc kubenswrapper[5118]: I0121 00:09:24.408338 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56466->192.168.126.11:17697: read: connection reset by peer" Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.413006 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 21 00:09:24 crc kubenswrapper[5118]: &Event{ObjectMeta:{kube-apiserver-crc.188c9670b181634f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:56466->192.168.126.11:17697: read: connection reset by peer Jan 21 00:09:24 crc kubenswrapper[5118]: body: Jan 21 00:09:24 crc kubenswrapper[5118]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:24.408312655 +0000 UTC m=+19.732559683,LastTimestamp:2026-01-21 
00:09:24.408312655 +0000 UTC m=+19.732559683,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 21 00:09:24 crc kubenswrapper[5118]: > Jan 21 00:09:24 crc kubenswrapper[5118]: E0121 00:09:24.417914 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c9670b18210a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56466->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:24.408357027 +0000 UTC m=+19.732604045,LastTimestamp:2026-01-21 00:09:24.408357027 +0000 UTC m=+19.732604045,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:24 crc kubenswrapper[5118]: I0121 00:09:24.878303 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:25 crc kubenswrapper[5118]: E0121 00:09:25.017624 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.075626 5118 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.077371 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="46e3f4d2281defbe831ecaac3f2191effc8d95433fd22da93fd2bf2660080b7d" exitCode=255 Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.077491 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"46e3f4d2281defbe831ecaac3f2191effc8d95433fd22da93fd2bf2660080b7d"} Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.077566 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.077796 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.078070 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.078097 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.078105 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:25 crc kubenswrapper[5118]: E0121 00:09:25.078350 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.078630 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.078661 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.078671 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:25 crc kubenswrapper[5118]: E0121 00:09:25.078923 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.079178 5118 scope.go:117] "RemoveContainer" containerID="46e3f4d2281defbe831ecaac3f2191effc8d95433fd22da93fd2bf2660080b7d" Jan 21 00:09:25 crc kubenswrapper[5118]: E0121 00:09:25.085466 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c966ce649a7d5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966ce649a7d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:08.113983445 +0000 UTC m=+3.438230463,LastTimestamp:2026-01-21 00:09:25.080027531 +0000 UTC m=+20.404274559,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:25 crc kubenswrapper[5118]: E0121 00:09:25.300974 5118 event.go:359] "Server rejected event (will not retry!)" 
err="events \"kube-apiserver-crc.188c966cf6a4141c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966cf6a4141c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:08.38834486 +0000 UTC m=+3.712591878,LastTimestamp:2026-01-21 00:09:25.292393709 +0000 UTC m=+20.616640727,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:25 crc kubenswrapper[5118]: E0121 00:09:25.351380 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c966cf73b7600\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966cf73b7600 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:08.398265856 +0000 UTC m=+3.722512874,LastTimestamp:2026-01-21 00:09:25.343779926 +0000 UTC m=+20.668026954,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.858121 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.858386 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.859130 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.859269 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.859356 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:25 crc kubenswrapper[5118]: E0121 00:09:25.859853 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.878150 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 00:09:25 crc kubenswrapper[5118]: I0121 00:09:25.879059 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:26 crc kubenswrapper[5118]: I0121 00:09:26.081295 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 21 00:09:26 crc kubenswrapper[5118]: I0121 00:09:26.083265 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1b1b6dc299c6d32f77586bb5d935a39c5f355cbdd8ed587eceac38b7e3c76b04"} Jan 21 00:09:26 crc kubenswrapper[5118]: I0121 00:09:26.083347 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:26 crc kubenswrapper[5118]: I0121 00:09:26.083486 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:26 crc kubenswrapper[5118]: I0121 00:09:26.083486 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:26 crc kubenswrapper[5118]: I0121 00:09:26.083896 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:26 crc kubenswrapper[5118]: I0121 00:09:26.083939 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:26 crc kubenswrapper[5118]: I0121 00:09:26.083955 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:26 crc kubenswrapper[5118]: I0121 00:09:26.084047 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:26 crc kubenswrapper[5118]: I0121 00:09:26.084075 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:26 crc kubenswrapper[5118]: I0121 00:09:26.084088 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:26 crc kubenswrapper[5118]: I0121 00:09:26.084147 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:26 crc kubenswrapper[5118]: I0121 00:09:26.084184 5118 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:26 crc kubenswrapper[5118]: I0121 00:09:26.084194 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:26 crc kubenswrapper[5118]: E0121 00:09:26.084468 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:26 crc kubenswrapper[5118]: E0121 00:09:26.084652 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:26 crc kubenswrapper[5118]: E0121 00:09:26.084853 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:26 crc kubenswrapper[5118]: I0121 00:09:26.878506 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 00:09:27.086986 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 00:09:27.087722 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 00:09:27.089440 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="1b1b6dc299c6d32f77586bb5d935a39c5f355cbdd8ed587eceac38b7e3c76b04" exitCode=255 Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 00:09:27.089494 5118 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"1b1b6dc299c6d32f77586bb5d935a39c5f355cbdd8ed587eceac38b7e3c76b04"} Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 00:09:27.089527 5118 scope.go:117] "RemoveContainer" containerID="46e3f4d2281defbe831ecaac3f2191effc8d95433fd22da93fd2bf2660080b7d" Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 00:09:27.089761 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 00:09:27.090433 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 00:09:27.090465 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 00:09:27.090477 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:27 crc kubenswrapper[5118]: E0121 00:09:27.090942 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 00:09:27.091442 5118 scope.go:117] "RemoveContainer" containerID="1b1b6dc299c6d32f77586bb5d935a39c5f355cbdd8ed587eceac38b7e3c76b04" Jan 21 00:09:27 crc kubenswrapper[5118]: E0121 00:09:27.091677 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 00:09:27 crc kubenswrapper[5118]: E0121 
00:09:27.096371 5118 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c96715171ac2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:27.091637292 +0000 UTC m=+22.415884310,LastTimestamp:2026-01-21 00:09:27.091637292 +0000 UTC m=+22.415884310,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:27 crc kubenswrapper[5118]: E0121 00:09:27.510817 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 00:09:27.739412 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 00:09:27.740282 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 00:09:27.740371 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 
00:09:27.740384 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 00:09:27.740405 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 00:09:27 crc kubenswrapper[5118]: E0121 00:09:27.750391 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 00:09:27 crc kubenswrapper[5118]: I0121 00:09:27.880300 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:28 crc kubenswrapper[5118]: E0121 00:09:28.036315 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 00:09:28 crc kubenswrapper[5118]: I0121 00:09:28.095109 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 21 00:09:28 crc kubenswrapper[5118]: E0121 00:09:28.306373 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 00:09:28 crc kubenswrapper[5118]: I0121 00:09:28.878873 5118 csi_plugin.go:988] Failed to contact API server when waiting for 
CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:29 crc kubenswrapper[5118]: I0121 00:09:29.882884 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:30 crc kubenswrapper[5118]: I0121 00:09:30.879073 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:30 crc kubenswrapper[5118]: E0121 00:09:30.893288 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 00:09:31 crc kubenswrapper[5118]: I0121 00:09:31.879559 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:32 crc kubenswrapper[5118]: I0121 00:09:32.882194 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:33 crc kubenswrapper[5118]: E0121 00:09:33.552976 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource 
\"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 00:09:33 crc kubenswrapper[5118]: I0121 00:09:33.878968 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:34 crc kubenswrapper[5118]: E0121 00:09:34.517073 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 00:09:34 crc kubenswrapper[5118]: I0121 00:09:34.751407 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:34 crc kubenswrapper[5118]: I0121 00:09:34.752316 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:34 crc kubenswrapper[5118]: I0121 00:09:34.752357 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:34 crc kubenswrapper[5118]: I0121 00:09:34.752370 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:34 crc kubenswrapper[5118]: I0121 00:09:34.752392 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 00:09:34 crc kubenswrapper[5118]: E0121 00:09:34.762396 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 00:09:34 crc kubenswrapper[5118]: I0121 00:09:34.880643 5118 csi_plugin.go:988] Failed to 
contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:35 crc kubenswrapper[5118]: E0121 00:09:35.017806 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 00:09:35 crc kubenswrapper[5118]: E0121 00:09:35.657980 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 00:09:35 crc kubenswrapper[5118]: I0121 00:09:35.881958 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:36 crc kubenswrapper[5118]: I0121 00:09:36.084568 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:36 crc kubenswrapper[5118]: I0121 00:09:36.085066 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:36 crc kubenswrapper[5118]: I0121 00:09:36.086031 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:36 crc kubenswrapper[5118]: I0121 00:09:36.086103 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:36 crc kubenswrapper[5118]: I0121 00:09:36.086131 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:36 crc kubenswrapper[5118]: E0121 00:09:36.086768 
5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:36 crc kubenswrapper[5118]: I0121 00:09:36.087306 5118 scope.go:117] "RemoveContainer" containerID="1b1b6dc299c6d32f77586bb5d935a39c5f355cbdd8ed587eceac38b7e3c76b04" Jan 21 00:09:36 crc kubenswrapper[5118]: E0121 00:09:36.087769 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 00:09:36 crc kubenswrapper[5118]: E0121 00:09:36.097620 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c96715171ac2c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c96715171ac2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:27.091637292 +0000 UTC m=+22.415884310,LastTimestamp:2026-01-21 00:09:36.087670696 +0000 UTC m=+31.411917754,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:36 crc 
kubenswrapper[5118]: I0121 00:09:36.879071 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:36 crc kubenswrapper[5118]: I0121 00:09:36.901527 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:36 crc kubenswrapper[5118]: I0121 00:09:36.901931 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:36 crc kubenswrapper[5118]: I0121 00:09:36.903221 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:36 crc kubenswrapper[5118]: I0121 00:09:36.903507 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:36 crc kubenswrapper[5118]: I0121 00:09:36.903735 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:36 crc kubenswrapper[5118]: E0121 00:09:36.904710 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:36 crc kubenswrapper[5118]: I0121 00:09:36.905385 5118 scope.go:117] "RemoveContainer" containerID="1b1b6dc299c6d32f77586bb5d935a39c5f355cbdd8ed587eceac38b7e3c76b04" Jan 21 00:09:36 crc kubenswrapper[5118]: E0121 00:09:36.915251 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c966ce649a7d5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966ce649a7d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:08.113983445 +0000 UTC m=+3.438230463,LastTimestamp:2026-01-21 00:09:36.907387858 +0000 UTC m=+32.231634916,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:37 crc kubenswrapper[5118]: I0121 00:09:37.881447 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:38 crc kubenswrapper[5118]: E0121 00:09:38.685541 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 00:09:38 crc kubenswrapper[5118]: I0121 00:09:38.879302 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:39 crc kubenswrapper[5118]: I0121 00:09:39.885751 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get 
resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:40 crc kubenswrapper[5118]: I0121 00:09:40.878128 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:41 crc kubenswrapper[5118]: E0121 00:09:41.270220 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c966cf6a4141c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966cf6a4141c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:08.38834486 +0000 UTC m=+3.712591878,LastTimestamp:2026-01-21 00:09:41.263942746 +0000 UTC m=+36.588189774,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:41 crc kubenswrapper[5118]: E0121 00:09:41.370491 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c966cf73b7600\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966cf73b7600 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:08.398265856 +0000 UTC m=+3.722512874,LastTimestamp:2026-01-21 00:09:41.365391104 +0000 UTC m=+36.689638152,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:41 crc kubenswrapper[5118]: E0121 00:09:41.524899 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 00:09:41 crc kubenswrapper[5118]: I0121 00:09:41.762538 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:41 crc kubenswrapper[5118]: I0121 00:09:41.763509 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:41 crc kubenswrapper[5118]: I0121 00:09:41.763547 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:41 crc kubenswrapper[5118]: I0121 00:09:41.763559 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:41 crc kubenswrapper[5118]: I0121 00:09:41.763593 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 00:09:41 crc kubenswrapper[5118]: E0121 00:09:41.772240 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User 
\"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 00:09:41 crc kubenswrapper[5118]: I0121 00:09:41.878751 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:42 crc kubenswrapper[5118]: I0121 00:09:42.131443 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 21 00:09:42 crc kubenswrapper[5118]: I0121 00:09:42.134480 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6bb1eac04f935f051e7899ff153b3815a705148174dd8ea6f94d003d172a0a44"} Jan 21 00:09:42 crc kubenswrapper[5118]: I0121 00:09:42.134775 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:42 crc kubenswrapper[5118]: I0121 00:09:42.135860 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:42 crc kubenswrapper[5118]: I0121 00:09:42.135941 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:42 crc kubenswrapper[5118]: I0121 00:09:42.135962 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:42 crc kubenswrapper[5118]: E0121 00:09:42.136576 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:42 crc kubenswrapper[5118]: I0121 00:09:42.878556 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:43 crc kubenswrapper[5118]: I0121 00:09:43.879252 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:44 crc kubenswrapper[5118]: I0121 00:09:44.151518 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 21 00:09:44 crc kubenswrapper[5118]: I0121 00:09:44.153063 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 21 00:09:44 crc kubenswrapper[5118]: I0121 00:09:44.156782 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6bb1eac04f935f051e7899ff153b3815a705148174dd8ea6f94d003d172a0a44" exitCode=255 Jan 21 00:09:44 crc kubenswrapper[5118]: I0121 00:09:44.157186 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"6bb1eac04f935f051e7899ff153b3815a705148174dd8ea6f94d003d172a0a44"} Jan 21 00:09:44 crc kubenswrapper[5118]: I0121 00:09:44.157435 5118 scope.go:117] "RemoveContainer" containerID="1b1b6dc299c6d32f77586bb5d935a39c5f355cbdd8ed587eceac38b7e3c76b04" Jan 21 00:09:44 crc kubenswrapper[5118]: I0121 00:09:44.157664 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:44 crc kubenswrapper[5118]: I0121 00:09:44.159240 5118 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:44 crc kubenswrapper[5118]: I0121 00:09:44.159326 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:44 crc kubenswrapper[5118]: I0121 00:09:44.159347 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:44 crc kubenswrapper[5118]: E0121 00:09:44.160344 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:44 crc kubenswrapper[5118]: I0121 00:09:44.160905 5118 scope.go:117] "RemoveContainer" containerID="6bb1eac04f935f051e7899ff153b3815a705148174dd8ea6f94d003d172a0a44" Jan 21 00:09:44 crc kubenswrapper[5118]: E0121 00:09:44.161299 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 00:09:44 crc kubenswrapper[5118]: E0121 00:09:44.170021 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c96715171ac2c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c96715171ac2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container 
kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:27.091637292 +0000 UTC m=+22.415884310,LastTimestamp:2026-01-21 00:09:44.161240615 +0000 UTC m=+39.485487673,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:44 crc kubenswrapper[5118]: I0121 00:09:44.878610 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:45 crc kubenswrapper[5118]: E0121 00:09:45.018734 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 00:09:45 crc kubenswrapper[5118]: I0121 00:09:45.160429 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 21 00:09:45 crc kubenswrapper[5118]: I0121 00:09:45.880732 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:46 crc kubenswrapper[5118]: I0121 00:09:46.883237 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:46 crc kubenswrapper[5118]: I0121 00:09:46.901931 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 
00:09:46 crc kubenswrapper[5118]: I0121 00:09:46.902427 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:46 crc kubenswrapper[5118]: I0121 00:09:46.903892 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:46 crc kubenswrapper[5118]: I0121 00:09:46.903957 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:46 crc kubenswrapper[5118]: I0121 00:09:46.903984 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:46 crc kubenswrapper[5118]: E0121 00:09:46.904639 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:46 crc kubenswrapper[5118]: I0121 00:09:46.905121 5118 scope.go:117] "RemoveContainer" containerID="6bb1eac04f935f051e7899ff153b3815a705148174dd8ea6f94d003d172a0a44" Jan 21 00:09:46 crc kubenswrapper[5118]: E0121 00:09:46.905572 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 00:09:46 crc kubenswrapper[5118]: E0121 00:09:46.913657 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c96715171ac2c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c96715171ac2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:27.091637292 +0000 UTC m=+22.415884310,LastTimestamp:2026-01-21 00:09:46.905503693 +0000 UTC m=+42.229750751,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:47 crc kubenswrapper[5118]: I0121 00:09:47.880299 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:48 crc kubenswrapper[5118]: E0121 00:09:48.534668 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 00:09:48 crc kubenswrapper[5118]: E0121 00:09:48.641574 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 00:09:48 crc kubenswrapper[5118]: I0121 00:09:48.773236 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:48 crc kubenswrapper[5118]: I0121 00:09:48.774539 5118 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:48 crc kubenswrapper[5118]: I0121 00:09:48.774596 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:48 crc kubenswrapper[5118]: I0121 00:09:48.774615 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:48 crc kubenswrapper[5118]: I0121 00:09:48.774646 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 00:09:48 crc kubenswrapper[5118]: E0121 00:09:48.784395 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 00:09:48 crc kubenswrapper[5118]: I0121 00:09:48.879854 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:49 crc kubenswrapper[5118]: I0121 00:09:49.879375 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:50 crc kubenswrapper[5118]: I0121 00:09:50.880701 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:51 crc kubenswrapper[5118]: E0121 00:09:51.611400 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API 
group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 00:09:51 crc kubenswrapper[5118]: I0121 00:09:51.879413 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:52 crc kubenswrapper[5118]: I0121 00:09:52.136413 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:09:52 crc kubenswrapper[5118]: I0121 00:09:52.136798 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:52 crc kubenswrapper[5118]: I0121 00:09:52.137789 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:52 crc kubenswrapper[5118]: I0121 00:09:52.137853 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:52 crc kubenswrapper[5118]: I0121 00:09:52.137865 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:52 crc kubenswrapper[5118]: E0121 00:09:52.138223 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:09:52 crc kubenswrapper[5118]: I0121 00:09:52.138478 5118 scope.go:117] "RemoveContainer" containerID="6bb1eac04f935f051e7899ff153b3815a705148174dd8ea6f94d003d172a0a44" Jan 21 00:09:52 crc kubenswrapper[5118]: E0121 00:09:52.138663 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 00:09:52 crc kubenswrapper[5118]: E0121 00:09:52.143851 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c96715171ac2c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c96715171ac2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:27.091637292 +0000 UTC m=+22.415884310,LastTimestamp:2026-01-21 00:09:52.138640457 +0000 UTC m=+47.462887475,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:09:52 crc kubenswrapper[5118]: I0121 00:09:52.880615 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:53 crc kubenswrapper[5118]: I0121 00:09:53.880056 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:53 crc 
kubenswrapper[5118]: E0121 00:09:53.923011 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 00:09:54 crc kubenswrapper[5118]: E0121 00:09:54.671654 5118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 00:09:54 crc kubenswrapper[5118]: I0121 00:09:54.880534 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:55 crc kubenswrapper[5118]: E0121 00:09:55.019353 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 00:09:55 crc kubenswrapper[5118]: E0121 00:09:55.539707 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 00:09:55 crc kubenswrapper[5118]: I0121 00:09:55.785255 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:09:55 crc kubenswrapper[5118]: I0121 00:09:55.786991 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:09:55 crc kubenswrapper[5118]: I0121 
00:09:55.787041 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:09:55 crc kubenswrapper[5118]: I0121 00:09:55.787053 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:09:55 crc kubenswrapper[5118]: I0121 00:09:55.787079 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 00:09:55 crc kubenswrapper[5118]: E0121 00:09:55.800211 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 00:09:55 crc kubenswrapper[5118]: I0121 00:09:55.881142 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:56 crc kubenswrapper[5118]: I0121 00:09:56.881427 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:57 crc kubenswrapper[5118]: I0121 00:09:57.878590 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:58 crc kubenswrapper[5118]: I0121 00:09:58.878494 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:09:59 crc kubenswrapper[5118]: I0121 
00:09:59.880535 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:10:00 crc kubenswrapper[5118]: I0121 00:10:00.878441 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:10:00 crc kubenswrapper[5118]: I0121 00:10:00.948510 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:10:00 crc kubenswrapper[5118]: I0121 00:10:00.948705 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:10:00 crc kubenswrapper[5118]: I0121 00:10:00.949447 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:00 crc kubenswrapper[5118]: I0121 00:10:00.949477 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:00 crc kubenswrapper[5118]: I0121 00:10:00.949489 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:00 crc kubenswrapper[5118]: E0121 00:10:00.949817 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:10:01 crc kubenswrapper[5118]: I0121 00:10:01.878863 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:10:02 crc kubenswrapper[5118]: E0121 
00:10:02.548112 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 00:10:02 crc kubenswrapper[5118]: I0121 00:10:02.804352 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:10:02 crc kubenswrapper[5118]: I0121 00:10:02.805337 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:02 crc kubenswrapper[5118]: I0121 00:10:02.805410 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:02 crc kubenswrapper[5118]: I0121 00:10:02.805928 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:02 crc kubenswrapper[5118]: I0121 00:10:02.805972 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 00:10:02 crc kubenswrapper[5118]: E0121 00:10:02.818088 5118 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 00:10:02 crc kubenswrapper[5118]: I0121 00:10:02.882540 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:10:03 crc kubenswrapper[5118]: I0121 00:10:03.878045 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the 
cluster scope Jan 21 00:10:04 crc kubenswrapper[5118]: I0121 00:10:04.879905 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:10:05 crc kubenswrapper[5118]: E0121 00:10:05.020394 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 00:10:05 crc kubenswrapper[5118]: I0121 00:10:05.880266 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:10:05 crc kubenswrapper[5118]: I0121 00:10:05.975292 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:10:05 crc kubenswrapper[5118]: I0121 00:10:05.976396 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:05 crc kubenswrapper[5118]: I0121 00:10:05.976459 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:05 crc kubenswrapper[5118]: I0121 00:10:05.976476 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:05 crc kubenswrapper[5118]: E0121 00:10:05.976928 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:10:05 crc kubenswrapper[5118]: I0121 00:10:05.977336 5118 scope.go:117] "RemoveContainer" containerID="6bb1eac04f935f051e7899ff153b3815a705148174dd8ea6f94d003d172a0a44" Jan 21 00:10:05 crc kubenswrapper[5118]: E0121 00:10:05.984029 5118 event.go:359] "Server rejected event (will not 
retry!)" err="events \"kube-apiserver-crc.188c966ce649a7d5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c966ce649a7d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:08.113983445 +0000 UTC m=+3.438230463,LastTimestamp:2026-01-21 00:10:05.978988294 +0000 UTC m=+61.303235312,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:10:06 crc kubenswrapper[5118]: I0121 00:10:06.880005 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:10:07 crc kubenswrapper[5118]: I0121 00:10:07.224807 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 21 00:10:07 crc kubenswrapper[5118]: I0121 00:10:07.226654 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b"} Jan 21 00:10:07 crc 
kubenswrapper[5118]: I0121 00:10:07.227020 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:10:07 crc kubenswrapper[5118]: I0121 00:10:07.227795 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:07 crc kubenswrapper[5118]: I0121 00:10:07.227846 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:07 crc kubenswrapper[5118]: I0121 00:10:07.227860 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:07 crc kubenswrapper[5118]: E0121 00:10:07.228312 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:10:07 crc kubenswrapper[5118]: I0121 00:10:07.879752 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 00:10:08 crc kubenswrapper[5118]: I0121 00:10:08.231389 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 21 00:10:08 crc kubenswrapper[5118]: I0121 00:10:08.232232 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 21 00:10:08 crc kubenswrapper[5118]: I0121 00:10:08.233994 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b" exitCode=255 Jan 21 00:10:08 crc kubenswrapper[5118]: I0121 00:10:08.234032 5118 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b"} Jan 21 00:10:08 crc kubenswrapper[5118]: I0121 00:10:08.234078 5118 scope.go:117] "RemoveContainer" containerID="6bb1eac04f935f051e7899ff153b3815a705148174dd8ea6f94d003d172a0a44" Jan 21 00:10:08 crc kubenswrapper[5118]: I0121 00:10:08.234345 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:10:08 crc kubenswrapper[5118]: I0121 00:10:08.235005 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:08 crc kubenswrapper[5118]: I0121 00:10:08.235052 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:08 crc kubenswrapper[5118]: I0121 00:10:08.235071 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:08 crc kubenswrapper[5118]: E0121 00:10:08.235626 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 00:10:08 crc kubenswrapper[5118]: I0121 00:10:08.236006 5118 scope.go:117] "RemoveContainer" containerID="4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b" Jan 21 00:10:08 crc kubenswrapper[5118]: E0121 00:10:08.236357 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 00:10:08 crc kubenswrapper[5118]: E0121 
00:10:08.245651 5118 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c96715171ac2c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c96715171ac2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:09:27.091637292 +0000 UTC m=+22.415884310,LastTimestamp:2026-01-21 00:10:08.236310361 +0000 UTC m=+63.560557419,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 00:10:08 crc kubenswrapper[5118]: I0121 00:10:08.879533 5118 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.237760 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 21 00:10:09 crc kubenswrapper[5118]: E0121 00:10:09.554798 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace 
\"kube-node-lease\"" interval="7s" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.588248 5118 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-wqqh7" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.594288 5118 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-wqqh7" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.615278 5118 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.807635 5118 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.818590 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.820421 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.820471 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.820486 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.820597 5118 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.832419 5118 kubelet_node_status.go:127] "Node was previously registered" node="crc" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.832717 5118 kubelet_node_status.go:81] "Successfully registered node" node="crc" Jan 21 00:10:09 crc kubenswrapper[5118]: E0121 00:10:09.832744 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error 
getting node \"crc\": node \"crc\" not found"
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.836172 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.836212 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.836225 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.836246 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.836259 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:09Z","lastTransitionTime":"2026-01-21T00:10:09Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Jan 21 00:10:09 crc kubenswrapper[5118]: E0121 00:10:09.853824 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
e245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c076895
57c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733
aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b
9-ec34ac96c5c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.860961 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.861017 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.861031 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.861054 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.861066 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:09Z","lastTransitionTime":"2026-01-21T00:10:09Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Jan 21 00:10:09 crc kubenswrapper[5118]: E0121 00:10:09.871144 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
e245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c076895
57c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733
aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b
9-ec34ac96c5c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.878376 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.878409 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.878417 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.878432 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.878442 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:09Z","lastTransitionTime":"2026-01-21T00:10:09Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Jan 21 00:10:09 crc kubenswrapper[5118]: E0121 00:10:09.888900 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
e245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c076895
57c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733
aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b
9-ec34ac96c5c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.895484 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.895542 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.895556 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.895572 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:09 crc kubenswrapper[5118]: I0121 00:10:09.895583 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:09Z","lastTransitionTime":"2026-01-21T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:09 crc kubenswrapper[5118]: E0121 00:10:09.905939 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b9-ec34ac96c5c7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:09 crc kubenswrapper[5118]: E0121 00:10:09.906100 5118 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 21 00:10:09 crc kubenswrapper[5118]: E0121 00:10:09.906129 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:10 crc kubenswrapper[5118]: E0121 00:10:10.006900 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:10 crc kubenswrapper[5118]: E0121 00:10:10.107118 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:10 crc kubenswrapper[5118]: E0121 00:10:10.208069 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:10 crc kubenswrapper[5118]: E0121 00:10:10.308775 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:10 crc kubenswrapper[5118]: E0121 00:10:10.409314 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:10 crc kubenswrapper[5118]: E0121 00:10:10.509506 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:10 crc kubenswrapper[5118]: I0121 00:10:10.595570 5118 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-20 00:05:09 +0000 UTC" deadline="2026-02-13 00:24:33.252124065 +0000 UTC" Jan 21 00:10:10 crc kubenswrapper[5118]: I0121 00:10:10.595674 5118 certificate_manager.go:431] "Waiting for next 
certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="552h14m22.656454853s" Jan 21 00:10:10 crc kubenswrapper[5118]: E0121 00:10:10.609894 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:10 crc kubenswrapper[5118]: E0121 00:10:10.710018 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:10 crc kubenswrapper[5118]: E0121 00:10:10.811065 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:10 crc kubenswrapper[5118]: E0121 00:10:10.911234 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:11 crc kubenswrapper[5118]: E0121 00:10:11.011516 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:11 crc kubenswrapper[5118]: E0121 00:10:11.112589 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:11 crc kubenswrapper[5118]: E0121 00:10:11.213623 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:11 crc kubenswrapper[5118]: E0121 00:10:11.314637 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:11 crc kubenswrapper[5118]: E0121 00:10:11.415150 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:11 crc kubenswrapper[5118]: E0121 00:10:11.516132 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:11 crc kubenswrapper[5118]: E0121 00:10:11.617252 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:11 crc 
kubenswrapper[5118]: E0121 00:10:11.717596 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:11 crc kubenswrapper[5118]: E0121 00:10:11.818113 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:11 crc kubenswrapper[5118]: E0121 00:10:11.918388 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:12 crc kubenswrapper[5118]: E0121 00:10:12.018954 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:12 crc kubenswrapper[5118]: E0121 00:10:12.119259 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:12 crc kubenswrapper[5118]: E0121 00:10:12.219489 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:12 crc kubenswrapper[5118]: E0121 00:10:12.319948 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:12 crc kubenswrapper[5118]: E0121 00:10:12.420409 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:12 crc kubenswrapper[5118]: E0121 00:10:12.521247 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:12 crc kubenswrapper[5118]: E0121 00:10:12.622116 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:12 crc kubenswrapper[5118]: E0121 00:10:12.722995 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:12 crc kubenswrapper[5118]: E0121 00:10:12.824045 5118 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found"
Jan 21 00:10:12 crc kubenswrapper[5118]: E0121 00:10:12.924616 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:13 crc kubenswrapper[5118]: E0121 00:10:13.025674 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:13 crc kubenswrapper[5118]: E0121 00:10:13.126284 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:13 crc kubenswrapper[5118]: E0121 00:10:13.226989 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:13 crc kubenswrapper[5118]: E0121 00:10:13.327500 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:13 crc kubenswrapper[5118]: E0121 00:10:13.427561 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:13 crc kubenswrapper[5118]: E0121 00:10:13.528592 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:13 crc kubenswrapper[5118]: E0121 00:10:13.629691 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:13 crc kubenswrapper[5118]: E0121 00:10:13.730221 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:13 crc kubenswrapper[5118]: E0121 00:10:13.831031 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:13 crc kubenswrapper[5118]: E0121 00:10:13.931465 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:14 crc kubenswrapper[5118]: E0121 00:10:14.031631 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:14 crc kubenswrapper[5118]: E0121 00:10:14.132646 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:14 crc kubenswrapper[5118]: E0121 00:10:14.233229 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:14 crc kubenswrapper[5118]: E0121 00:10:14.333944 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:14 crc kubenswrapper[5118]: E0121 00:10:14.434905 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:14 crc kubenswrapper[5118]: E0121 00:10:14.535520 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:14 crc kubenswrapper[5118]: E0121 00:10:14.636388 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:14 crc kubenswrapper[5118]: E0121 00:10:14.737320 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:14 crc kubenswrapper[5118]: E0121 00:10:14.837663 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:14 crc kubenswrapper[5118]: E0121 00:10:14.938447 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:14 crc kubenswrapper[5118]: I0121 00:10:14.975396 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:10:14 crc kubenswrapper[5118]: I0121 00:10:14.976177 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:14 crc kubenswrapper[5118]: I0121 00:10:14.976215 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:14 crc kubenswrapper[5118]: I0121 00:10:14.976227 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:14 crc kubenswrapper[5118]: E0121 00:10:14.976569 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 00:10:15 crc kubenswrapper[5118]: E0121 00:10:15.020798 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 00:10:15 crc kubenswrapper[5118]: E0121 00:10:15.038889 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:15 crc kubenswrapper[5118]: E0121 00:10:15.139964 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:15 crc kubenswrapper[5118]: E0121 00:10:15.240898 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:15 crc kubenswrapper[5118]: E0121 00:10:15.341265 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:15 crc kubenswrapper[5118]: E0121 00:10:15.442206 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:15 crc kubenswrapper[5118]: E0121 00:10:15.543168 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:15 crc kubenswrapper[5118]: E0121 00:10:15.643535 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:15 crc kubenswrapper[5118]: E0121 00:10:15.744198 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:15 crc kubenswrapper[5118]: E0121 00:10:15.844507 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:15 crc kubenswrapper[5118]: E0121 00:10:15.945002 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:16 crc kubenswrapper[5118]: E0121 00:10:16.046225 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:16 crc kubenswrapper[5118]: E0121 00:10:16.146359 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:16 crc kubenswrapper[5118]: E0121 00:10:16.247479 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:16 crc kubenswrapper[5118]: E0121 00:10:16.347801 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:16 crc kubenswrapper[5118]: E0121 00:10:16.448924 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:16 crc kubenswrapper[5118]: E0121 00:10:16.549890 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:16 crc kubenswrapper[5118]: E0121 00:10:16.650955 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:16 crc kubenswrapper[5118]: E0121 00:10:16.751113 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:16 crc kubenswrapper[5118]: E0121 00:10:16.852101 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:16 crc kubenswrapper[5118]: I0121 00:10:16.901467 5118
kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 00:10:16 crc kubenswrapper[5118]: I0121 00:10:16.901989 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:10:16 crc kubenswrapper[5118]: I0121 00:10:16.902912 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:16 crc kubenswrapper[5118]: I0121 00:10:16.902964 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:16 crc kubenswrapper[5118]: I0121 00:10:16.902980 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:16 crc kubenswrapper[5118]: E0121 00:10:16.903500 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 00:10:16 crc kubenswrapper[5118]: I0121 00:10:16.903759 5118 scope.go:117] "RemoveContainer" containerID="4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b"
Jan 21 00:10:16 crc kubenswrapper[5118]: E0121 00:10:16.903979 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 21 00:10:16 crc kubenswrapper[5118]: E0121 00:10:16.953507 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:17 crc kubenswrapper[5118]: E0121 00:10:17.054132 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:17 crc kubenswrapper[5118]: E0121 00:10:17.155090 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:17 crc kubenswrapper[5118]: I0121 00:10:17.227307 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 00:10:17 crc kubenswrapper[5118]: E0121 00:10:17.255606 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:17 crc kubenswrapper[5118]: I0121 00:10:17.257202 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:10:17 crc kubenswrapper[5118]: I0121 00:10:17.257821 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:17 crc kubenswrapper[5118]: I0121 00:10:17.257848 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:17 crc kubenswrapper[5118]: I0121 00:10:17.257893 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:17 crc kubenswrapper[5118]: E0121 00:10:17.258315 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 00:10:17 crc kubenswrapper[5118]: I0121 00:10:17.258559 5118 scope.go:117] "RemoveContainer" containerID="4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b"
Jan 21 00:10:17 crc kubenswrapper[5118]: E0121 00:10:17.258785 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 21 00:10:17 crc kubenswrapper[5118]: E0121 00:10:17.356311 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:17 crc kubenswrapper[5118]: E0121 00:10:17.457441 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:17 crc kubenswrapper[5118]: E0121 00:10:17.558295 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:17 crc kubenswrapper[5118]: E0121 00:10:17.659262 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:17 crc kubenswrapper[5118]: E0121 00:10:17.760150 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:17 crc kubenswrapper[5118]: E0121 00:10:17.860993 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:17 crc kubenswrapper[5118]: E0121 00:10:17.961238 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:18 crc kubenswrapper[5118]: E0121 00:10:18.062191 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:18 crc kubenswrapper[5118]: E0121 00:10:18.162593 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:18 crc kubenswrapper[5118]: E0121 00:10:18.263257 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:18 crc kubenswrapper[5118]: E0121 00:10:18.364011 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:18 crc kubenswrapper[5118]: E0121 00:10:18.465146 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:18 crc kubenswrapper[5118]: E0121 00:10:18.565540 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:18 crc kubenswrapper[5118]: E0121 00:10:18.666616 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:18 crc kubenswrapper[5118]: E0121 00:10:18.767644 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:18 crc kubenswrapper[5118]: E0121 00:10:18.868150 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:18 crc kubenswrapper[5118]: E0121 00:10:18.968617 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:19 crc kubenswrapper[5118]: E0121 00:10:19.068870 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:19 crc kubenswrapper[5118]: E0121 00:10:19.169322 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:19 crc kubenswrapper[5118]: E0121 00:10:19.270060 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:19 crc kubenswrapper[5118]: E0121 00:10:19.370504 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:19 crc kubenswrapper[5118]: E0121 00:10:19.471716 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:19 crc kubenswrapper[5118]: E0121 00:10:19.571866 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:19 crc kubenswrapper[5118]: E0121 00:10:19.672977 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:19 crc kubenswrapper[5118]: E0121 00:10:19.773121 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:19 crc kubenswrapper[5118]: E0121 00:10:19.873246 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:19 crc kubenswrapper[5118]: E0121 00:10:19.973892 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.074718 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.079025 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.083295 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.083361 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.083381 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.083407 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.083425 5118 setters.go:618] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:20Z","lastTransitionTime":"2026-01-21T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.097681 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b9-ec34ac96c5c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.104519 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.104568 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.104582 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.104600 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.104617 5118 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:20Z","lastTransitionTime":"2026-01-21T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.117278 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b9-ec34ac96c5c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.124084 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.124179 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.124191 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.124205 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.124214 5118 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:20Z","lastTransitionTime":"2026-01-21T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.133544 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b9-ec34ac96c5c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.139454 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.139511 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.139524 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.139541 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:20 crc kubenswrapper[5118]: I0121 00:10:20.139552 5118 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:20Z","lastTransitionTime":"2026-01-21T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.149025 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b9-ec34ac96c5c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.149260 5118 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.174873 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.275251 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.375350 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.476302 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.577344 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.678330 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.779031 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.880101 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:20 crc kubenswrapper[5118]: E0121 00:10:20.980660 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:21 crc kubenswrapper[5118]: E0121 00:10:21.080836 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:21 crc kubenswrapper[5118]: E0121 00:10:21.181173 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:21 crc kubenswrapper[5118]: E0121 00:10:21.281935 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:21 crc kubenswrapper[5118]: E0121 00:10:21.382254 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:21 crc kubenswrapper[5118]: E0121 00:10:21.482703 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:21 crc kubenswrapper[5118]: E0121 00:10:21.583683 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:21 crc kubenswrapper[5118]: E0121 00:10:21.684128 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:21 crc kubenswrapper[5118]: E0121 00:10:21.784968 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:21 crc kubenswrapper[5118]: E0121 00:10:21.885420 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:21 crc kubenswrapper[5118]: E0121 00:10:21.985943 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:22 crc kubenswrapper[5118]: E0121 00:10:22.087233 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:22 crc kubenswrapper[5118]: E0121 00:10:22.188196 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:22 crc kubenswrapper[5118]: E0121 00:10:22.288827 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:22 crc kubenswrapper[5118]: E0121 00:10:22.389387 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:22 crc kubenswrapper[5118]: E0121 00:10:22.490291 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:22 crc kubenswrapper[5118]: E0121 00:10:22.591301 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:22 crc kubenswrapper[5118]: E0121 00:10:22.691875 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:22 crc kubenswrapper[5118]: E0121 00:10:22.792247 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:22 crc kubenswrapper[5118]: E0121 00:10:22.892652 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:22 crc kubenswrapper[5118]: E0121 00:10:22.993238 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:23 crc kubenswrapper[5118]: E0121 00:10:23.093390 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:23 crc kubenswrapper[5118]: E0121 00:10:23.194381 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:23 crc kubenswrapper[5118]: E0121 00:10:23.294799 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:23 crc kubenswrapper[5118]: E0121 00:10:23.426879 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:23 crc kubenswrapper[5118]: E0121 00:10:23.527324 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:23 crc kubenswrapper[5118]: E0121 00:10:23.628318 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:23 crc kubenswrapper[5118]: E0121 00:10:23.735823 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:23 crc kubenswrapper[5118]: E0121 00:10:23.836937 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:23 crc kubenswrapper[5118]: E0121 00:10:23.937374 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:24 crc kubenswrapper[5118]: E0121 00:10:24.037738 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:24 crc kubenswrapper[5118]: E0121 00:10:24.137833 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:24 crc kubenswrapper[5118]: E0121 00:10:24.238517 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:24 crc kubenswrapper[5118]: E0121 00:10:24.339413 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:24 crc kubenswrapper[5118]: E0121 00:10:24.439706 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:24 crc kubenswrapper[5118]: E0121 00:10:24.540917 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:24 crc kubenswrapper[5118]: E0121 00:10:24.641060 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:24 crc kubenswrapper[5118]: E0121 00:10:24.742182 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:24 crc kubenswrapper[5118]: E0121 00:10:24.843282 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:24 crc kubenswrapper[5118]: E0121 00:10:24.944224 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:25 crc kubenswrapper[5118]: E0121 00:10:25.021337 5118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 00:10:25 crc kubenswrapper[5118]: E0121 00:10:25.044679 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:25 crc kubenswrapper[5118]: E0121 00:10:25.145348 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:25 crc kubenswrapper[5118]: E0121 00:10:25.246292 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:25 crc kubenswrapper[5118]: E0121 00:10:25.346635 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:25 crc kubenswrapper[5118]: E0121 00:10:25.447738 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:25 crc kubenswrapper[5118]: E0121 00:10:25.548937 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:25 crc kubenswrapper[5118]: E0121 00:10:25.649599 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:25 crc kubenswrapper[5118]: E0121 00:10:25.749848 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:25 crc kubenswrapper[5118]: E0121 00:10:25.850888 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:25 crc kubenswrapper[5118]: E0121 00:10:25.951205 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:26 crc kubenswrapper[5118]: E0121 00:10:26.051515 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:26 crc kubenswrapper[5118]: E0121 00:10:26.151747 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:26 crc kubenswrapper[5118]: E0121 00:10:26.252100 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:26 crc kubenswrapper[5118]: E0121 00:10:26.353092 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:26 crc kubenswrapper[5118]: E0121 00:10:26.453836 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:26 crc kubenswrapper[5118]: E0121 00:10:26.554879 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:26 crc kubenswrapper[5118]: E0121 00:10:26.655105 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:26 crc kubenswrapper[5118]: E0121 00:10:26.756124 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:26 crc kubenswrapper[5118]: E0121 00:10:26.856912 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:26 crc kubenswrapper[5118]: E0121 00:10:26.957717 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:27 crc kubenswrapper[5118]: E0121 00:10:27.058197 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:27 crc kubenswrapper[5118]: E0121 00:10:27.159109 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:27 crc kubenswrapper[5118]: E0121 00:10:27.259933 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:27 crc kubenswrapper[5118]: E0121 00:10:27.360272 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:27 crc kubenswrapper[5118]: E0121 00:10:27.461028 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:27 crc kubenswrapper[5118]: E0121 00:10:27.562125 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:27 crc kubenswrapper[5118]: E0121 00:10:27.662970 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:27 crc kubenswrapper[5118]: E0121 00:10:27.763339 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:27 crc kubenswrapper[5118]: E0121 00:10:27.863742 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:27 crc kubenswrapper[5118]: E0121 00:10:27.963824 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:28 crc kubenswrapper[5118]: E0121 00:10:28.064377 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:28 crc kubenswrapper[5118]: E0121 00:10:28.165513 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:28 crc kubenswrapper[5118]: E0121 00:10:28.266244 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:28 crc kubenswrapper[5118]: E0121 00:10:28.367241 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:28 crc kubenswrapper[5118]: E0121 00:10:28.468315 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:28 crc kubenswrapper[5118]: E0121 00:10:28.569021 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:28 crc kubenswrapper[5118]: E0121 00:10:28.670059 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:28 crc kubenswrapper[5118]: E0121 00:10:28.770830 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:28 crc kubenswrapper[5118]: E0121 00:10:28.871399 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:28 crc kubenswrapper[5118]: E0121 00:10:28.972212 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:29 crc kubenswrapper[5118]: E0121 00:10:29.072413 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:29 crc kubenswrapper[5118]: E0121 00:10:29.173072 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:29 crc kubenswrapper[5118]: E0121 00:10:29.273834 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:29 crc kubenswrapper[5118]: E0121 00:10:29.374334 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:29 crc kubenswrapper[5118]: E0121 00:10:29.475548 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:29 crc kubenswrapper[5118]: E0121 00:10:29.576760 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:29 crc kubenswrapper[5118]: E0121 00:10:29.677563 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:29 crc kubenswrapper[5118]: E0121 00:10:29.777694 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:29 crc kubenswrapper[5118]: E0121 00:10:29.878280 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:29 crc kubenswrapper[5118]: I0121 00:10:29.975925 5118 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 00:10:29 crc kubenswrapper[5118]: I0121 00:10:29.977345 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:29 crc kubenswrapper[5118]: I0121 00:10:29.977408 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:29 crc kubenswrapper[5118]: I0121 00:10:29.977423 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:29 crc kubenswrapper[5118]: E0121 00:10:29.978010 5118 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 00:10:29 crc kubenswrapper[5118]: I0121 00:10:29.978335 5118 scope.go:117] "RemoveContainer" containerID="4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b"
Jan 21 00:10:29 crc kubenswrapper[5118]: E0121 00:10:29.978572 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 21 00:10:29 crc kubenswrapper[5118]: E0121 00:10:29.978989 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 00:10:29 crc kubenswrapper[5118]: I0121 00:10:29.991919 5118 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.079376 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not
found" Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.180367 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.243943 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.247964 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.247999 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.248008 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.248020 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.248030 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:30Z","lastTransitionTime":"2026-01-21T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.256935 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b9-ec34ac96c5c7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.260418 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.260456 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.260468 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.260484 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.260495 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:30Z","lastTransitionTime":"2026-01-21T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.268462 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b9-ec34ac96c5c7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.271857 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.271918 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.271930 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.271948 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.271958 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:30Z","lastTransitionTime":"2026-01-21T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.279997 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b9-ec34ac96c5c7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.282953 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.282990 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.283003 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.283017 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:30 crc kubenswrapper[5118]: I0121 00:10:30.283026 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:30Z","lastTransitionTime":"2026-01-21T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.292086 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b9-ec34ac96c5c7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.292249 5118 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.292275 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.393310 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.494055 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.594265 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.694659 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.795468 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.896421 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:30 crc kubenswrapper[5118]: E0121 00:10:30.997543 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:31 crc kubenswrapper[5118]: E0121 00:10:31.098614 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:31 crc kubenswrapper[5118]: E0121 00:10:31.199364 5118 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:31 crc kubenswrapper[5118]: E0121 00:10:31.299506 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:31 crc kubenswrapper[5118]: E0121 00:10:31.399758 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:31 crc kubenswrapper[5118]: E0121 00:10:31.500542 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:31 crc kubenswrapper[5118]: E0121 00:10:31.601526 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:31 crc kubenswrapper[5118]: E0121 00:10:31.702543 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:31 crc kubenswrapper[5118]: E0121 00:10:31.803658 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:31 crc kubenswrapper[5118]: E0121 00:10:31.904794 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:32 crc kubenswrapper[5118]: E0121 00:10:32.005680 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:32 crc kubenswrapper[5118]: E0121 00:10:32.106205 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:32 crc kubenswrapper[5118]: E0121 00:10:32.207227 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:32 crc kubenswrapper[5118]: E0121 00:10:32.308224 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:32 crc 
kubenswrapper[5118]: E0121 00:10:32.408786 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:32 crc kubenswrapper[5118]: E0121 00:10:32.508941 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:32 crc kubenswrapper[5118]: E0121 00:10:32.609573 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:32 crc kubenswrapper[5118]: E0121 00:10:32.710461 5118 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.749762 5118 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.786860 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.806364 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.812928 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.812969 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.812982 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.813016 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.813030 5118 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:32Z","lastTransitionTime":"2026-01-21T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.908110 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.915416 5118 apiserver.go:52] "Watching apiserver" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.915936 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.915981 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.915993 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.916012 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.916023 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:32Z","lastTransitionTime":"2026-01-21T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.924428 5118 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.924937 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-multus/multus-qcqwq","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6","openshift-ovn-kubernetes/ovnkube-node-h8fs2","openshift-image-registry/node-ca-9sftt","openshift-network-node-identity/network-node-identity-dgvkt","openshift-kube-apiserver/kube-apiserver-crc","openshift-machine-config-operator/machine-config-daemon-22r9n","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-multus/multus-additional-cni-plugins-d4lsz","openshift-multus/network-metrics-daemon-9hvtf","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/iptables-alerter-5jnd7","openshift-dns/node-resolver-znhzw"] Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.926422 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.930801 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 00:10:32 crc kubenswrapper[5118]: E0121 00:10:32.930914 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.930928 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:10:32 crc kubenswrapper[5118]: E0121 00:10:32.931680 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.933735 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.933924 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.934492 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.934754 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.935935 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.936193 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.938132 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.938709 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.938818 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.939213 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.939706 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.948034 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.958343 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.968244 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.976885 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.985592 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.998183 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.998506 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91e46657-55ca-43e7-9a43-6bb875c7debf-ovn-node-metrics-cert\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.998613 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:32 crc kubenswrapper[5118]: E0121 00:10:32.998687 5118 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:10:32 crc kubenswrapper[5118]: E0121 00:10:32.998786 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:33.498765748 +0000 UTC m=+88.823012766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.998696 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-kubelet\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.998945 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-run-netns\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.999028 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfh6k\" (UniqueName: \"kubernetes.io/projected/91e46657-55ca-43e7-9a43-6bb875c7debf-kube-api-access-bfh6k\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.999141 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-slash\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.999273 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.999361 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-var-lib-openvswitch\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.999437 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-openvswitch\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.999679 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-node-log\") pod \"ovnkube-node-h8fs2\" (UID: 
\"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.999738 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-cni-netd\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.999768 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-cni-bin\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.999792 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.999815 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-config\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.999844 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.999866 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.999895 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-run-ovn-kubernetes\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.999918 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 00:10:32 crc kubenswrapper[5118]: I0121 00:10:32.999937 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-ovn\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:32.999959 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-env-overrides\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:32.999986 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.000011 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.000037 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-systemd-units\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.000057 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-systemd\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.000085 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.000124 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.000193 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-etc-openvswitch\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.000220 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.000245 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-log-socket\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 
00:10:33.000289 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-script-lib\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.000320 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.000349 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.000379 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.000405 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: 
\"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.000780 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.000838 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:33.500825313 +0000 UTC m=+88.825072331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.004687 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.013979 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.014041 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.014057 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.014123 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:33.514106746 +0000 UTC m=+88.838353764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.017639 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.017682 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.017693 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.017709 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.017721 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:33Z","lastTransitionTime":"2026-01-21T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.018895 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.018919 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.018931 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.018985 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:33.518972815 +0000 UTC m=+88.843219833 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101214 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-env-overrides\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101284 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-systemd-units\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101311 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-systemd\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101386 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-etc-openvswitch\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101417 5118 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-log-socket\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101415 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-systemd-units\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101439 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-script-lib\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101493 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-etc-openvswitch\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101528 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101512 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-systemd\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.101508 5118 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: object "openshift-ovn-kubernetes"/"ovnkube-script-lib" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.101527 5118 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: object "openshift-ovn-kubernetes"/"env-overrides" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101599 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101633 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91e46657-55ca-43e7-9a43-6bb875c7debf-ovn-node-metrics-cert\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101639 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-log-socket\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101660 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-kubelet\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101681 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-kubelet\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.101710 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-script-lib podName:91e46657-55ca-43e7-9a43-6bb875c7debf nodeName:}" failed. No retries permitted until 2026-01-21 00:10:33.601685301 +0000 UTC m=+88.925932379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-script-lib") pod "ovnkube-node-h8fs2" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf") : object "openshift-ovn-kubernetes"/"ovnkube-script-lib" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.101761 5118 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: object "openshift-ovn-kubernetes"/"ovn-node-metrics-cert" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101769 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-run-netns\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.101804 5118 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-env-overrides podName:91e46657-55ca-43e7-9a43-6bb875c7debf nodeName:}" failed. No retries permitted until 2026-01-21 00:10:33.601796474 +0000 UTC m=+88.926043492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-env-overrides") pod "ovnkube-node-h8fs2" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf") : object "openshift-ovn-kubernetes"/"env-overrides" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101818 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-run-netns\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.101833 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91e46657-55ca-43e7-9a43-6bb875c7debf-ovn-node-metrics-cert podName:91e46657-55ca-43e7-9a43-6bb875c7debf nodeName:}" failed. No retries permitted until 2026-01-21 00:10:33.601815274 +0000 UTC m=+88.926062312 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/91e46657-55ca-43e7-9a43-6bb875c7debf-ovn-node-metrics-cert") pod "ovnkube-node-h8fs2" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf") : object "openshift-ovn-kubernetes"/"ovn-node-metrics-cert" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101860 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bfh6k\" (UniqueName: \"kubernetes.io/projected/91e46657-55ca-43e7-9a43-6bb875c7debf-kube-api-access-bfh6k\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101889 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-slash\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101913 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-var-lib-openvswitch\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101934 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-openvswitch\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.101954 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-node-log\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102007 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-var-lib-openvswitch\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102011 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-cni-netd\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102053 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-cni-netd\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102097 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-cni-bin\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102131 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-slash\")
pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102208 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102210 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-openvswitch\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102257 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102264 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-node-log\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102302 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-config\") pod
\"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102321 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-cni-bin\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102328 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-run-ovn-kubernetes\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102347 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-run-ovn-kubernetes\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102369 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102395 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-ovn\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") "
pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.102396 5118 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: object "openshift-ovn-kubernetes"/"ovnkube-config" not registered
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102434 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.102450 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-config podName:91e46657-55ca-43e7-9a43-6bb875c7debf nodeName:}" failed. No retries permitted until 2026-01-21 00:10:33.602428511 +0000 UTC m=+88.926675569 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-config") pod "ovnkube-node-h8fs2" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf") : object "openshift-ovn-kubernetes"/"ovnkube-config" not registered
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.102467 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-ovn\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.108003 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc"
Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.118979 5118 projected.go:289] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: object "openshift-ovn-kubernetes"/"kube-root-ca.crt" not registered
Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.119012 5118 projected.go:289] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: object "openshift-ovn-kubernetes"/"openshift-service-ca.crt" not registered
Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.119023 5118 projected.go:194] Error preparing data for projected volume kube-api-access-bfh6k for pod openshift-ovn-kubernetes/ovnkube-node-h8fs2: [object "openshift-ovn-kubernetes"/"kube-root-ca.crt" not registered, object "openshift-ovn-kubernetes"/"openshift-service-ca.crt" not registered]
Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.119079 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91e46657-55ca-43e7-9a43-6bb875c7debf-kube-api-access-bfh6k podName:91e46657-55ca-43e7-9a43-6bb875c7debf nodeName:}" failed. No retries permitted until 2026-01-21 00:10:33.619060652 +0000 UTC m=+88.943307670 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-bfh6k" (UniqueName: "kubernetes.io/projected/91e46657-55ca-43e7-9a43-6bb875c7debf-kube-api-access-bfh6k") pod "ovnkube-node-h8fs2" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf") : [object "openshift-ovn-kubernetes"/"kube-root-ca.crt" not registered, object "openshift-ovn-kubernetes"/"openshift-service-ca.crt" not registered]
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.119901 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.119929 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.119937 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.119951 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.119961 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:33Z","lastTransitionTime":"2026-01-21T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.185607 5118 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled.
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.186334 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.186651 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.186894 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.193226 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.193290 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName:
\"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.193628 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.195338 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.195589 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.222801 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.223296 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.223411 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.223535 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.223616 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:33Z","lastTransitionTime":"2026-01-21T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.245040 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.260396 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.274180 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 21 00:10:33 crc kubenswrapper[5118]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash
Jan 21 00:10:33 crc kubenswrapper[5118]: set -o allexport
Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then
Jan 21 00:10:33 crc kubenswrapper[5118]: source /etc/kubernetes/apiserver-url.env
Jan 21 00:10:33 crc kubenswrapper[5118]: else
Jan 21 00:10:33 crc kubenswrapper[5118]: echo "Error: /etc/kubernetes/apiserver-url.env is missing"
Jan 21 00:10:33 crc kubenswrapper[5118]: exit 1
Jan 21 00:10:33 crc kubenswrapper[5118]: fi
Jan 21 00:10:33 crc kubenswrapper[5118]: exec /usr/bin/cluster-network-operator start
--listen=0.0.0.0:9104
Jan 21 00:10:33 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},E
nvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectF
ieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 21 00:10:33 crc kubenswrapper[5118]: > logger="UnhandledError"
Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.275595 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8"
Jan 21 00:10:33 crc kubenswrapper[5118]: W0121 00:10:33.284848 5118 manager.go:1169] Failed to process watch event {EventType:0
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-28aa37338ec042f3c60ffc19a9fdf0a88d6213e99e59978871b02ab2aed03dcb WatchSource:0}: Error finding container 28aa37338ec042f3c60ffc19a9fdf0a88d6213e99e59978871b02ab2aed03dcb: Status 404 returned error can't find the container with id 28aa37338ec042f3c60ffc19a9fdf0a88d6213e99e59978871b02ab2aed03dcb
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.327388 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.327457 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.327468 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.327486 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.327499 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:33Z","lastTransitionTime":"2026-01-21T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.429467 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.429509 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.429518 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.429556 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.429574 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:33Z","lastTransitionTime":"2026-01-21T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.436848 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 21 00:10:33 crc kubenswrapper[5118]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ -f "/env/_master" ]]; then
Jan 21 00:10:33 crc kubenswrapper[5118]: set -o allexport
Jan 21 00:10:33 crc kubenswrapper[5118]: source "/env/_master"
Jan 21 00:10:33 crc kubenswrapper[5118]: set +o allexport
Jan 21 00:10:33 crc kubenswrapper[5118]: fi
Jan 21 00:10:33 crc kubenswrapper[5118]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled.
Jan 21 00:10:33 crc kubenswrapper[5118]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791
Jan 21 00:10:33 crc kubenswrapper[5118]: ho_enable="--enable-hybrid-overlay"
Jan 21 00:10:33 crc kubenswrapper[5118]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook"
Jan 21 00:10:33 crc kubenswrapper[5118]: # extra-allowed-user: service account `ovn-kubernetes-control-plane`
Jan 21 00:10:33 crc kubenswrapper[5118]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager)
Jan 21 00:10:33 crc kubenswrapper[5118]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Jan 21 00:10:33 crc kubenswrapper[5118]: --webhook-cert-dir="/etc/webhook-cert" \
Jan 21 00:10:33 crc kubenswrapper[5118]: --webhook-host=127.0.0.1 \
Jan 21 00:10:33 crc kubenswrapper[5118]: --webhook-port=9743 \
Jan 21 00:10:33 crc kubenswrapper[5118]: ${ho_enable} \
Jan 21 00:10:33 crc kubenswrapper[5118]: --enable-interconnect \
Jan 21 00:10:33 crc kubenswrapper[5118]: --disable-approver \
Jan 21 00:10:33 crc kubenswrapper[5118]:
--extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \
Jan 21 00:10:33 crc kubenswrapper[5118]: --wait-for-kubernetes-api=200s \
Jan 21 00:10:33 crc kubenswrapper[5118]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \
Jan 21 00:10:33 crc kubenswrapper[5118]: --loglevel="${LOGLEVEL}"
Jan 21 00:10:33 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false
,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 21 00:10:33 crc kubenswrapper[5118]: > logger="UnhandledError"
Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.439460 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 21 00:10:33 crc kubenswrapper[5118]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ -f "/env/_master" ]]; then
Jan 21 00:10:33 crc kubenswrapper[5118]: set -o allexport
Jan 21 00:10:33 crc kubenswrapper[5118]: source "/env/_master"
Jan 21 00:10:33 crc kubenswrapper[5118]: set +o allexport
Jan 21 00:10:33 crc kubenswrapper[5118]: fi
Jan 21 00:10:33 crc kubenswrapper[5118]:
Jan 21 00:10:33 crc kubenswrapper[5118]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver"
Jan 21 00:10:33 crc kubenswrapper[5118]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Jan 21 00:10:33 crc kubenswrapper[5118]: --disable-webhook \
Jan 21 00:10:33 crc kubenswrapper[5118]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \
Jan 21 00:10:33 crc kubenswrapper[5118]: --loglevel="${LOGLEVEL}"
Jan 21 00:10:33 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 21 00:10:33 crc kubenswrapper[5118]: > logger="UnhandledError"
Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.441061 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt"
podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.442059 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.442233 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.442382 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.451088 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.451292 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.451099 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.451786 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.451996 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.452257 5118 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.452356 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.464964 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.475506 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.475669 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.477721 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.478073 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.478224 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.478568 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.478793 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.486647 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.491052 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.502705 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.505485 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/50c45c57-9291-48d3-8022-00a314541104-serviceca\") pod \"node-ca-9sftt\" (UID: \"50c45c57-9291-48d3-8022-00a314541104\") " pod="openshift-image-registry/node-ca-9sftt" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.505581 5118 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.505639 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/50c45c57-9291-48d3-8022-00a314541104-host\") pod \"node-ca-9sftt\" (UID: \"50c45c57-9291-48d3-8022-00a314541104\") " pod="openshift-image-registry/node-ca-9sftt" Jan 21 00:10:33 crc kubenswrapper[5118]: W0121 00:10:33.505655 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-644879c5a12ee5e6ee3edf2bd722a9b6a249bf9d2e4b58c6a66d37c62ffe51e0 WatchSource:0}: Error finding container 644879c5a12ee5e6ee3edf2bd722a9b6a249bf9d2e4b58c6a66d37c62ffe51e0: Status 404 returned error can't find the container with id 644879c5a12ee5e6ee3edf2bd722a9b6a249bf9d2e4b58c6a66d37c62ffe51e0 Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.505684 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxsjl\" (UniqueName: \"kubernetes.io/projected/44eb9bc7-60a3-421c-bf5e-d1d9a5026435-kube-api-access-vxsjl\") pod \"machine-config-daemon-22r9n\" (UID: \"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\") " pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.505823 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5fh5\" (UniqueName: 
\"kubernetes.io/projected/50c45c57-9291-48d3-8022-00a314541104-kube-api-access-j5fh5\") pod \"node-ca-9sftt\" (UID: \"50c45c57-9291-48d3-8022-00a314541104\") " pod="openshift-image-registry/node-ca-9sftt" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.505857 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/44eb9bc7-60a3-421c-bf5e-d1d9a5026435-rootfs\") pod \"machine-config-daemon-22r9n\" (UID: \"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\") " pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.505882 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.505898 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.505932 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/44eb9bc7-60a3-421c-bf5e-d1d9a5026435-proxy-tls\") pod \"machine-config-daemon-22r9n\" (UID: \"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\") " pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.505978 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-01-21 00:10:34.505948874 +0000 UTC m=+89.830196072 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.506025 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/44eb9bc7-60a3-421c-bf5e-d1d9a5026435-mcd-auth-proxy-config\") pod \"machine-config-daemon-22r9n\" (UID: \"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\") " pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.506052 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.506133 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:34.506113738 +0000 UTC m=+89.830360756 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.513576 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.520122 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPol
icy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.521978 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.525904 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.531688 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.531748 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.531764 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 
00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.531782 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.531796 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:33Z","lastTransitionTime":"2026-01-21T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.548758 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e46657-55ca-43e7-9a43-6bb875c7debf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h8fs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.549484 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9sftt" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.549867 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.552602 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.552870 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.553021 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.553298 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.553040 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.553143 5118 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.553579 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.553613 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.553081 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.553694 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.553823 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.553986 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.562209 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-22r9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.565446 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.568082 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.568317 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.571539 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.575789 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.578484 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.579768 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.580460 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.588300 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"19bea61500191964a388817cc0a31dabd54d654d5987ce55d17e62517ed80535"} Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.588468 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.588511 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"28aa37338ec042f3c60ffc19a9fdf0a88d6213e99e59978871b02ab2aed03dcb"} Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.588530 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.588477 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-znhzw" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.589402 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.589866 5118 scope.go:117] "RemoveContainer" containerID="4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.590023 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.590402 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 00:10:33 crc kubenswrapper[5118]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 21 00:10:33 crc kubenswrapper[5118]: set -o allexport Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; 
then Jan 21 00:10:33 crc kubenswrapper[5118]: source /etc/kubernetes/apiserver-url.env Jan 21 00:10:33 crc kubenswrapper[5118]: else Jan 21 00:10:33 crc kubenswrapper[5118]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 21 00:10:33 crc kubenswrapper[5118]: exit 1 Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 21 00:10:33 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e2
2e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Nam
e:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:10:33 crc kubenswrapper[5118]: > logger="UnhandledError" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.590680 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.590732 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.590992 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.591520 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.599646 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.601280 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.607256 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0541bb33-5d4a-4ef9-964c-884c727499f6-cnibin\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " 
pod="openshift-multus/multus-additional-cni-plugins-d4lsz" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.607318 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-system-cni-dir\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.607356 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-var-lib-kubelet\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.607406 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-env-overrides\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.607445 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/50c45c57-9291-48d3-8022-00a314541104-serviceca\") pod \"node-ca-9sftt\" (UID: \"50c45c57-9291-48d3-8022-00a314541104\") " pod="openshift-image-registry/node-ca-9sftt" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.607479 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0541bb33-5d4a-4ef9-964c-884c727499f6-cni-binary-copy\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " 
pod="openshift-multus/multus-additional-cni-plugins-d4lsz" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.607516 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0541bb33-5d4a-4ef9-964c-884c727499f6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.607556 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0541bb33-5d4a-4ef9-964c-884c727499f6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.607588 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-run-netns\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.607618 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-etc-kubernetes\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.607672 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ddc3c284-5d85-4e40-b285-f16062ad8d9c-env-overrides\") pod 
\"ovnkube-control-plane-57b78d8988-kzdr6\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.607919 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-script-lib\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.607995 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608066 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/50c45c57-9291-48d3-8022-00a314541104-host\") pod \"node-ca-9sftt\" (UID: \"50c45c57-9291-48d3-8022-00a314541104\") " pod="openshift-image-registry/node-ca-9sftt" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608103 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0541bb33-5d4a-4ef9-964c-884c727499f6-os-release\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608129 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/ddc3c284-5d85-4e40-b285-f16062ad8d9c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-kzdr6\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608238 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/50c45c57-9291-48d3-8022-00a314541104-host\") pod \"node-ca-9sftt\" (UID: \"50c45c57-9291-48d3-8022-00a314541104\") " pod="openshift-image-registry/node-ca-9sftt" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608248 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-os-release\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608315 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7c0390f5-26b4-4299-958c-acac058be619-multus-daemon-config\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.608423 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.608454 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608456 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.608468 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608466 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-env-overrides\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608493 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-962nx\" (UniqueName: \"kubernetes.io/projected/acee46d0-3d60-4d08-abbd-b3df00872f90-kube-api-access-962nx\") pod \"node-resolver-znhzw\" (UID: \"acee46d0-3d60-4d08-abbd-b3df00872f90\") " pod="openshift-dns/node-resolver-znhzw" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608519 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzdws\" (UniqueName: \"kubernetes.io/projected/ddc3c284-5d85-4e40-b285-f16062ad8d9c-kube-api-access-fzdws\") pod \"ovnkube-control-plane-57b78d8988-kzdr6\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.608530 5118 projected.go:289] Couldn't get 
configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.608542 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608545 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vxsjl\" (UniqueName: \"kubernetes.io/projected/44eb9bc7-60a3-421c-bf5e-d1d9a5026435-kube-api-access-vxsjl\") pod \"machine-config-daemon-22r9n\" (UID: \"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\") " pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.608552 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.608620 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:34.608598179 +0000 UTC m=+89.932845197 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608660 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j5fh5\" (UniqueName: \"kubernetes.io/projected/50c45c57-9291-48d3-8022-00a314541104-kube-api-access-j5fh5\") pod \"node-ca-9sftt\" (UID: \"50c45c57-9291-48d3-8022-00a314541104\") " pod="openshift-image-registry/node-ca-9sftt" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608694 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/44eb9bc7-60a3-421c-bf5e-d1d9a5026435-rootfs\") pod \"machine-config-daemon-22r9n\" (UID: \"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\") " pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608739 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0541bb33-5d4a-4ef9-964c-884c727499f6-system-cni-dir\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608764 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjsv4\" (UniqueName: \"kubernetes.io/projected/21105fbf-0225-4ba6-ba90-17808d5250c6-kube-api-access-fjsv4\") pod \"network-metrics-daemon-9hvtf\" (UID: 
\"21105fbf-0225-4ba6-ba90-17808d5250c6\") " pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608785 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-multus-cni-dir\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608807 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7c0390f5-26b4-4299-958c-acac058be619-cni-binary-copy\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608829 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-hostroot\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608870 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-run-multus-certs\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608900 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91e46657-55ca-43e7-9a43-6bb875c7debf-ovn-node-metrics-cert\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608928 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-cnibin\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.608938 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:34.608927608 +0000 UTC m=+89.933174676 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.608970 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/44eb9bc7-60a3-421c-bf5e-d1d9a5026435-rootfs\") pod \"machine-config-daemon-22r9n\" (UID: \"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\") " pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.609117 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-script-lib\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc 
kubenswrapper[5118]: I0121 00:10:33.609214 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ddc3c284-5d85-4e40-b285-f16062ad8d9c-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-kzdr6\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.609349 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/44eb9bc7-60a3-421c-bf5e-d1d9a5026435-proxy-tls\") pod \"machine-config-daemon-22r9n\" (UID: \"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\") " pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.609388 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs\") pod \"network-metrics-daemon-9hvtf\" (UID: \"21105fbf-0225-4ba6-ba90-17808d5250c6\") " pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.609408 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-var-lib-cni-bin\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.609469 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/acee46d0-3d60-4d08-abbd-b3df00872f90-tmp-dir\") pod \"node-resolver-znhzw\" (UID: \"acee46d0-3d60-4d08-abbd-b3df00872f90\") " 
pod="openshift-dns/node-resolver-znhzw" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.609535 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/44eb9bc7-60a3-421c-bf5e-d1d9a5026435-mcd-auth-proxy-config\") pod \"machine-config-daemon-22r9n\" (UID: \"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\") " pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.609569 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/0541bb33-5d4a-4ef9-964c-884c727499f6-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.609658 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-run-k8s-cni-cncf-io\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.609709 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-config\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.610301 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/50c45c57-9291-48d3-8022-00a314541104-serviceca\") pod \"node-ca-9sftt\" (UID: 
\"50c45c57-9291-48d3-8022-00a314541104\") " pod="openshift-image-registry/node-ca-9sftt" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.610344 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-config\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.610365 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmjvj\" (UniqueName: \"kubernetes.io/projected/0541bb33-5d4a-4ef9-964c-884c727499f6-kube-api-access-qmjvj\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.610507 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/44eb9bc7-60a3-421c-bf5e-d1d9a5026435-mcd-auth-proxy-config\") pod \"machine-config-daemon-22r9n\" (UID: \"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\") " pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.610760 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-var-lib-cni-multus\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.610807 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5t5k\" (UniqueName: 
\"kubernetes.io/projected/7c0390f5-26b4-4299-958c-acac058be619-kube-api-access-j5t5k\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.610825 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/acee46d0-3d60-4d08-abbd-b3df00872f90-hosts-file\") pod \"node-resolver-znhzw\" (UID: \"acee46d0-3d60-4d08-abbd-b3df00872f90\") " pod="openshift-dns/node-resolver-znhzw" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.610849 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-multus-socket-dir-parent\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.610869 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-multus-conf-dir\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.612904 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.615676 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/44eb9bc7-60a3-421c-bf5e-d1d9a5026435-proxy-tls\") pod \"machine-config-daemon-22r9n\" (UID: \"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\") " pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.618308 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91e46657-55ca-43e7-9a43-6bb875c7debf-ovn-node-metrics-cert\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.624850 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.625544 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5fh5\" (UniqueName: \"kubernetes.io/projected/50c45c57-9291-48d3-8022-00a314541104-kube-api-access-j5fh5\") pod \"node-ca-9sftt\" (UID: \"50c45c57-9291-48d3-8022-00a314541104\") " pod="openshift-image-registry/node-ca-9sftt" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.626101 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vxsjl\" (UniqueName: \"kubernetes.io/projected/44eb9bc7-60a3-421c-bf5e-d1d9a5026435-kube-api-access-vxsjl\") pod \"machine-config-daemon-22r9n\" (UID: \"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\") " pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.633958 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc3c284-5d85-4e40-b285-f16062ad8d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kzdr6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.634268 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.634333 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.634347 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.634365 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.634397 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:33Z","lastTransitionTime":"2026-01-21T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.643147 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-znhzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acee46d0-3d60-4d08-abbd-b3df00872f90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-962nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-znhzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.652558 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4c6b76-3326-4edc-a392-9edcaf197d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69e28ae0052129054be6c0419161beea094bafc8c1cbcdcf5bf3436e7877d421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.662978 5118 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.672998 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.684628 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0541bb33-5d4a-4ef9-964c-884c727499f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d4lsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.687249 5118 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.696224 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b75e4d-eb03-4a0f-b349-9596c36b1f7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4192feefd5
cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:10:07Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0121 00:10:06.988883 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 00:10:06.989033 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 00:10:06.989980 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1710353325/tls.crt::/tmp/serving-cert-1710353325/tls.key\\\\\\\"\\\\nI0121 00:10:07.300917 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:10:07.302784 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:10:07.302805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:10:07.302832 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:10:07.302838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:10:07.306381 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:10:07.306408 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306414 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306419 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:10:07.306422 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:10:07.306426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:10:07.306429 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:10:07.306560 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:10:07.307535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:10:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.706485 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.711849 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.711899 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.711929 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.711960 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.711989 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712011 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712035 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712059 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712082 5118 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712107 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712130 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712173 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712200 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712224 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod 
\"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712249 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712274 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712314 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712338 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712359 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712379 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712368 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712400 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712426 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712479 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712610 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: 
\"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712641 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.712668 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.713052 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.713208 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.713217 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.713243 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.713266 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.713293 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.713441 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.713460 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.713589 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.713575 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.713911 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.713929 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.714136 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.714146 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.714332 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.714376 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.714420 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.714736 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.714747 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.714739 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.714781 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.714888 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.714899 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715018 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715083 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715174 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715195 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715280 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715307 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715332 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715356 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715425 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715457 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715476 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715494 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715514 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715531 5118 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715547 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715566 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715583 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715769 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715794 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715801 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715851 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715867 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715909 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715933 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715953 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715970 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.715986 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 
00:10:33.715996 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716003 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716025 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716042 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716059 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716076 5118 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716093 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716110 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716126 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716142 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716123 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). 
InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716171 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716189 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716207 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716225 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716242 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716262 5118 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716280 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716300 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716365 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716381 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716397 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716415 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716433 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716449 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716466 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716483 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 21 00:10:33 crc 
kubenswrapper[5118]: I0121 00:10:33.716499 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716518 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716534 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716552 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716571 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716589 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716606 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716620 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716637 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716654 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716696 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 
00:10:33.716712 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716729 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716747 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716763 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716780 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716798 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") 
pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716815 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716831 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716849 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716870 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716887 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716905 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716920 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716939 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716957 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716973 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716989 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 00:10:33 crc 
kubenswrapper[5118]: I0121 00:10:33.717004 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717020 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717036 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717056 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717076 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717093 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: 
\"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717111 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717130 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717146 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717184 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717201 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: 
\"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717217 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716187 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717233 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716202 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717234 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716243 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716325 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716473 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716533 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.716820 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717090 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717111 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717253 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717365 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717392 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717403 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717415 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717454 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717486 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717518 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717541 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717568 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717701 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717795 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717817 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717844 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717861 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717876 
5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717895 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717919 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717936 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717953 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717969 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.717986 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718003 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718021 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718039 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718055 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 
00:10:33.718073 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718096 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718121 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718145 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718185 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718189 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718317 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718343 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718366 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718497 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718503 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718519 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718579 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718605 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718625 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718647 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod 
\"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718667 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718665 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718678 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718715 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718777 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718820 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718694 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718888 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718909 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718927 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718936 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.719079 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.719259 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.719388 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.719434 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.718948 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.719620 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.719618 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.719666 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.719675 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.719729 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.719749 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.719764 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.719737 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.719795 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.719972 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720006 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 21 
00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720038 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720068 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720095 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720121 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720145 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.720209 5118 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:10:34.220181672 +0000 UTC m=+89.544428890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720204 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720229 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720265 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720296 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720326 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720355 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720371 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720344 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720383 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720441 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720472 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720498 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720525 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720556 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720585 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720611 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720642 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: 
\"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720667 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720693 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720719 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720745 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720771 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720798 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720827 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720854 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720886 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720915 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720944 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " 
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720968 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.720991 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.721019 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.721046 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.721075 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.721108 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: 
\"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.721134 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.721183 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.721215 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.721244 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.721545 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.721794 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.721981 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.722037 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.722122 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.722310 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.722312 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.722425 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.722670 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.722724 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.722735 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.722734 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.722821 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.722858 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.722892 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.722920 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.722921 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.722952 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.723011 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.723241 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.723294 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.723318 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.723339 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.723347 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.723363 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.723381 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.723602 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.723618 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.723639 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725312 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725388 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725411 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725435 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725515 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7c0390f5-26b4-4299-958c-acac058be619-cni-binary-copy\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725544 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-hostroot\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725569 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-run-multus-certs\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725594 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-cnibin\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725636 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bfh6k\" (UniqueName: \"kubernetes.io/projected/91e46657-55ca-43e7-9a43-6bb875c7debf-kube-api-access-bfh6k\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725704 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ddc3c284-5d85-4e40-b285-f16062ad8d9c-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-kzdr6\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725733 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs\") pod \"network-metrics-daemon-9hvtf\" (UID: \"21105fbf-0225-4ba6-ba90-17808d5250c6\") " pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725751 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-var-lib-cni-bin\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725773 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/acee46d0-3d60-4d08-abbd-b3df00872f90-tmp-dir\") pod \"node-resolver-znhzw\" (UID: \"acee46d0-3d60-4d08-abbd-b3df00872f90\") " pod="openshift-dns/node-resolver-znhzw" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725810 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/0541bb33-5d4a-4ef9-964c-884c727499f6-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " 
pod="openshift-multus/multus-additional-cni-plugins-d4lsz"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725837 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-run-k8s-cni-cncf-io\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725883 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qmjvj\" (UniqueName: \"kubernetes.io/projected/0541bb33-5d4a-4ef9-964c-884c727499f6-kube-api-access-qmjvj\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725910 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-var-lib-cni-multus\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725932 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j5t5k\" (UniqueName: \"kubernetes.io/projected/7c0390f5-26b4-4299-958c-acac058be619-kube-api-access-j5t5k\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725952 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/acee46d0-3d60-4d08-abbd-b3df00872f90-hosts-file\") pod \"node-resolver-znhzw\" (UID: \"acee46d0-3d60-4d08-abbd-b3df00872f90\") " pod="openshift-dns/node-resolver-znhzw"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725971 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-multus-socket-dir-parent\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725989 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-multus-conf-dir\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726010 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0541bb33-5d4a-4ef9-964c-884c727499f6-cnibin\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726026 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-system-cni-dir\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726045 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-var-lib-kubelet\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726087 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0541bb33-5d4a-4ef9-964c-884c727499f6-cni-binary-copy\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726206 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0541bb33-5d4a-4ef9-964c-884c727499f6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726240 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0541bb33-5d4a-4ef9-964c-884c727499f6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726258 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-run-netns\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726276 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-etc-kubernetes\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726314 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ddc3c284-5d85-4e40-b285-f16062ad8d9c-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-kzdr6\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726349 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0541bb33-5d4a-4ef9-964c-884c727499f6-os-release\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726369 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ddc3c284-5d85-4e40-b285-f16062ad8d9c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-kzdr6\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726400 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-os-release\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726417 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7c0390f5-26b4-4299-958c-acac058be619-multus-daemon-config\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726468 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-962nx\" (UniqueName: \"kubernetes.io/projected/acee46d0-3d60-4d08-abbd-b3df00872f90-kube-api-access-962nx\") pod \"node-resolver-znhzw\" (UID: \"acee46d0-3d60-4d08-abbd-b3df00872f90\") " pod="openshift-dns/node-resolver-znhzw"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726487 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fzdws\" (UniqueName: \"kubernetes.io/projected/ddc3c284-5d85-4e40-b285-f16062ad8d9c-kube-api-access-fzdws\") pod \"ovnkube-control-plane-57b78d8988-kzdr6\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726516 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0541bb33-5d4a-4ef9-964c-884c727499f6-system-cni-dir\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726534 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fjsv4\" (UniqueName: \"kubernetes.io/projected/21105fbf-0225-4ba6-ba90-17808d5250c6-kube-api-access-fjsv4\") pod \"network-metrics-daemon-9hvtf\" (UID: \"21105fbf-0225-4ba6-ba90-17808d5250c6\") " pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726554 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-multus-cni-dir\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726624 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726635 5118 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726645 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726656 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726665 5118 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726675 5118 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726684 5118 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726694 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726703 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726713 5118 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726722 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726732 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726742 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726752 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726761 5118 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726771 5118 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726781 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726791 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726801 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726810 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726821 5118 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726831 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726844 5118 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726854 5118 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726863 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726872 5118 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726882 5118 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726892 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726901 5118 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726912 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726923 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726972 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726983 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.726993 5118 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727002 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727014 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727027 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727036 5118 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727045 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727054 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727064 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727074 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727084 5118 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727094 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727103 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727112 5118 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727123 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727132 5118 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727141 5118 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727151 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727207 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727216 5118 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727225 5118 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727237 5118 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727246 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727257 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727267 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727277 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727287 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727297 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727307 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727317 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727319 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-multus-socket-dir-parent\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727326 5118 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727337 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727352 5118 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727367 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727378 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727384 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-multus-cni-dir\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727389 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.723743 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.723832 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.724199 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727410 5118 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727421 5118 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727430 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727440 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727448 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727458 5118 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727468 5118 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727478 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.724311 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.724592 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.724744 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.724845 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725027 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725174 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725247 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725263 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725330 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725403 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.725421 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727581 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-multus-conf-dir\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727620 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-system-cni-dir\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.727674 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-var-lib-kubelet\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.728481 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0541bb33-5d4a-4ef9-964c-884c727499f6-cnibin\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.729650 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0541bb33-5d4a-4ef9-964c-884c727499f6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.730822 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-run-netns\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.730860 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-etc-kubernetes\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.731357 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ddc3c284-5d85-4e40-b285-f16062ad8d9c-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-kzdr6\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.731505 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0541bb33-5d4a-4ef9-964c-884c727499f6-os-release\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.732521 5118 operation_generator.go:615] "MountVolume.SetUp
succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-os-release\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.732934 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-hostroot\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.732986 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-run-multus-certs\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.733039 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-cnibin\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.733432 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0541bb33-5d4a-4ef9-964c-884c727499f6-system-cni-dir\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.733591 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-run-k8s-cni-cncf-io\") pod \"multus-qcqwq\" (UID: 
\"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.733894 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-var-lib-cni-multus\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.734225 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7c0390f5-26b4-4299-958c-acac058be619-cni-binary-copy\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.734245 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.734319 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c0390f5-26b4-4299-958c-acac058be619-host-var-lib-cni-bin\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.734371 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs podName:21105fbf-0225-4ba6-ba90-17808d5250c6 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:34.234348228 +0000 UTC m=+89.558595316 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs") pod "network-metrics-daemon-9hvtf" (UID: "21105fbf-0225-4ba6-ba90-17808d5250c6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.734415 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.734484 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.734491 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/acee46d0-3d60-4d08-abbd-b3df00872f90-hosts-file\") pod \"node-resolver-znhzw\" (UID: \"acee46d0-3d60-4d08-abbd-b3df00872f90\") " pod="openshift-dns/node-resolver-znhzw" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.734553 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0541bb33-5d4a-4ef9-964c-884c727499f6-cni-binary-copy\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.734749 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.735039 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.735444 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0541bb33-5d4a-4ef9-964c-884c727499f6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.736331 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ddc3c284-5d85-4e40-b285-f16062ad8d9c-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-kzdr6\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.736475 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/0541bb33-5d4a-4ef9-964c-884c727499f6-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.736801 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/acee46d0-3d60-4d08-abbd-b3df00872f90-tmp-dir\") pod \"node-resolver-znhzw\" (UID: \"acee46d0-3d60-4d08-abbd-b3df00872f90\") " pod="openshift-dns/node-resolver-znhzw" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.737186 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: 
"d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.737229 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.737365 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.737367 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.737603 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.737704 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.737768 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.738243 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7c0390f5-26b4-4299-958c-acac058be619-multus-daemon-config\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.738313 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ddc3c284-5d85-4e40-b285-f16062ad8d9c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-kzdr6\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.739122 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 
00:10:33.739183 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.739195 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.739214 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.739226 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:33Z","lastTransitionTime":"2026-01-21T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.739411 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.739871 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e46657-55ca-43e7-9a43-6bb875c7debf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h8fs2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.743909 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfh6k\" (UniqueName: \"kubernetes.io/projected/91e46657-55ca-43e7-9a43-6bb875c7debf-kube-api-access-bfh6k\") pod \"ovnkube-node-h8fs2\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.745358 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.745555 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.746210 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.746533 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.746711 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.746998 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.747310 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.747761 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.748512 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzdws\" (UniqueName: \"kubernetes.io/projected/ddc3c284-5d85-4e40-b285-f16062ad8d9c-kube-api-access-fzdws\") pod \"ovnkube-control-plane-57b78d8988-kzdr6\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.749118 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5t5k\" (UniqueName: \"kubernetes.io/projected/7c0390f5-26b4-4299-958c-acac058be619-kube-api-access-j5t5k\") pod \"multus-qcqwq\" (UID: \"7c0390f5-26b4-4299-958c-acac058be619\") " pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.749260 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.749429 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sftt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c45c57-9291-48d3-8022-00a314541104\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5fh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sftt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.749618 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.750021 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.750049 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.750080 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.750382 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.750444 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.750553 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.750593 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.749992 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.750729 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.750820 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.751057 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.751415 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.751610 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjsv4\" (UniqueName: \"kubernetes.io/projected/21105fbf-0225-4ba6-ba90-17808d5250c6-kube-api-access-fjsv4\") pod \"network-metrics-daemon-9hvtf\" (UID: \"21105fbf-0225-4ba6-ba90-17808d5250c6\") " pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.752188 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmjvj\" (UniqueName: \"kubernetes.io/projected/0541bb33-5d4a-4ef9-964c-884c727499f6-kube-api-access-qmjvj\") pod \"multus-additional-cni-plugins-d4lsz\" (UID: \"0541bb33-5d4a-4ef9-964c-884c727499f6\") " pod="openshift-multus/multus-additional-cni-plugins-d4lsz" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.753489 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-962nx\" (UniqueName: \"kubernetes.io/projected/acee46d0-3d60-4d08-abbd-b3df00872f90-kube-api-access-962nx\") pod \"node-resolver-znhzw\" (UID: \"acee46d0-3d60-4d08-abbd-b3df00872f90\") " pod="openshift-dns/node-resolver-znhzw" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.753529 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.753573 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.753851 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.753978 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.754128 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.754386 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.754502 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.754756 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.754884 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.754981 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.754981 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.754888 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.755179 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.755476 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.755596 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.755737 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.756301 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.758070 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.758391 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.758590 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.758647 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.758753 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.758632 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.759043 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.759095 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.759420 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.759512 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.759538 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.760695 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.761195 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.761278 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.761832 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.762024 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.762069 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.762500 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.762538 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.762699 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.762763 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.762770 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.762974 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.763056 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.763062 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.763232 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ad38d8-0631-494b-8a0c-73936655173c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7da13cdac196a74d6f3d3fe06fd8b8f1b93152d831e98ee1b66f4bd30f77756b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106ef
fd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71ef5ad6b3cecbe91cf0fa1e4e8759ddda878222a1c71e9801313336e424458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRe
sources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://531e890ac624829dfeab5674374a20bf8f80e96fe3ad6baff6532501d078f297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93e48b61d0a2e616f65259ffbca42d9d000600a9f57c456e9fafc249cbbfa187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requ
ests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.763352 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.763396 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.763752 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.763883 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.764043 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.764138 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.764147 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.764279 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.764402 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.764905 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.764975 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.765595 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.766807 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.766862 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.767055 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.767239 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.767318 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.767411 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.767972 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.768341 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.768411 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.768615 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.768875 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.768928 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.768956 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.769026 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.769423 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.769839 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.770001 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.770040 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.770373 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.770499 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.770708 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.770699 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.770691 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.770768 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.770875 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.771225 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.771788 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.772209 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.772281 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.772337 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.772508 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.772599 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.772641 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.772689 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.772997 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.773293 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.773850 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.774092 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.774264 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.774597 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.774646 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea90c3b6-90f2-4468-8987-cbc4691535cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://01a6d01cbabb92bffcca05eb808b4bd0bee991f66f129422707d982e4e3d320f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://37fdbdbec8b545e1b3921af5413cad07f8ffa20745589533bc0fffa6ec9a42fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f3214d25bbdd49a8a29ce6f30a600024d862102e53bee5c64ac3f0880d97481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b344beee6c6fb8f37064a0308\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b344beee6c6fb8f37064a0308\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.775217 5118 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.784955 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.789128 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.792686 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.799365 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.804828 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.829131 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.829374 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.829447 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 
00:10:33.829519 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.829643 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.829724 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.829796 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.829875 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.829942 5118 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.830018 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.830083 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.830170 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.830261 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.830344 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.830425 5118 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.830507 5118 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.830582 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.830657 5118 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") 
on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.830735 5118 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.830808 5118 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.830883 5118 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.830962 5118 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.831036 5118 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.831106 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.831193 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.831279 5118 
reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.831440 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.831512 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.831575 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.831644 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.831706 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.831768 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.831841 5118 reconciler_common.go:299] 
"Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.831914 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.831977 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.832058 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.832122 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.832324 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.832389 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.832460 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.839547 5118 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.839705 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.839799 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.839885 5118 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.839961 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.840030 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.840102 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: 
E0121 00:10:33.832962 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 00:10:33 crc kubenswrapper[5118]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 21 00:10:33 crc kubenswrapper[5118]: apiVersion: v1 Jan 21 00:10:33 crc kubenswrapper[5118]: clusters: Jan 21 00:10:33 crc kubenswrapper[5118]: - cluster: Jan 21 00:10:33 crc kubenswrapper[5118]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 21 00:10:33 crc kubenswrapper[5118]: server: https://api-int.crc.testing:6443 Jan 21 00:10:33 crc kubenswrapper[5118]: name: default-cluster Jan 21 00:10:33 crc kubenswrapper[5118]: contexts: Jan 21 00:10:33 crc kubenswrapper[5118]: - context: Jan 21 00:10:33 crc kubenswrapper[5118]: cluster: default-cluster Jan 21 00:10:33 crc kubenswrapper[5118]: namespace: default Jan 21 00:10:33 crc kubenswrapper[5118]: user: default-auth Jan 21 00:10:33 crc kubenswrapper[5118]: name: default-context Jan 21 00:10:33 crc kubenswrapper[5118]: current-context: default-context Jan 21 00:10:33 crc kubenswrapper[5118]: kind: Config Jan 21 00:10:33 crc kubenswrapper[5118]: preferences: {} Jan 21 00:10:33 crc kubenswrapper[5118]: users: Jan 21 00:10:33 crc kubenswrapper[5118]: - name: default-auth Jan 21 00:10:33 crc kubenswrapper[5118]: user: Jan 21 00:10:33 crc kubenswrapper[5118]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 21 00:10:33 crc kubenswrapper[5118]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 21 00:10:33 crc kubenswrapper[5118]: EOF Jan 21 00:10:33 crc kubenswrapper[5118]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfh6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-h8fs2_openshift-ovn-kubernetes(91e46657-55ca-43e7-9a43-6bb875c7debf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:10:33 crc kubenswrapper[5118]: > logger="UnhandledError" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.840203 5118 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.841860 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.841934 5118 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc 
kubenswrapper[5118]: I0121 00:10:33.842006 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.842082 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.842147 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.842322 5118 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.842479 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.843533 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844076 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 
21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844113 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844126 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844200 5118 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844219 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844232 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844242 5118 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844253 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844263 5118 reconciler_common.go:299] "Volume 
detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844274 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844284 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844295 5118 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844305 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844316 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844326 5118 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844336 5118 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844346 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844357 5118 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844367 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844377 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844387 5118 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844398 5118 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844409 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" 
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844421 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844433 5118 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844443 5118 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844453 5118 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844463 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844474 5118 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844486 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844497 5118 reconciler_common.go:299] 
"Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844508 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844520 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844531 5118 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844541 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844553 5118 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844564 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844575 5118 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844586 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844598 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844608 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844620 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844631 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844642 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844654 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844665 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844677 5118 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844689 5118 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844699 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844711 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844722 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844733 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844745 5118 
reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844757 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844767 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844777 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844787 5118 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844797 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844807 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844817 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: 
\"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844827 5118 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844837 5118 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844848 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844858 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844870 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844880 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844891 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844900 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844910 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844919 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844929 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844940 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844950 5118 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844961 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc 
kubenswrapper[5118]: I0121 00:10:33.844972 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844983 5118 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.844993 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.845003 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.845013 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.845023 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.845033 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.845043 5118 reconciler_common.go:299] "Volume detached for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.845053 5118 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.845063 5118 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.845073 5118 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.845084 5118 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.845094 5118 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.845103 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.845113 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: 
\"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.845124 5118 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.845912 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.846701 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.846734 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.846746 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.846761 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.846771 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:33Z","lastTransitionTime":"2026-01-21T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.847488 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxsjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.850863 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.852028 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxsjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.853261 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.858218 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qcqwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c0390f5-26b4-4299-958c-acac058be619\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5t5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qcqwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.867126 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9sftt"
Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.873707 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d4lsz"
Jan 21 00:10:33 crc kubenswrapper[5118]: W0121 00:10:33.875894 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50c45c57_9291_48d3_8022_00a314541104.slice/crio-50c376fd21b4867e278b2c0525785667d3c57803336d707cd2343ab8165ef725 WatchSource:0}: Error finding container 50c376fd21b4867e278b2c0525785667d3c57803336d707cd2343ab8165ef725: Status 404 returned error can't find the container with id 50c376fd21b4867e278b2c0525785667d3c57803336d707cd2343ab8165ef725
Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.878201 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 21 00:10:33 crc kubenswrapper[5118]: 	container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM
Jan 21 00:10:33 crc kubenswrapper[5118]: 	while [ true ];
Jan 21 00:10:33 crc kubenswrapper[5118]: 	do
Jan 21 00:10:33 crc kubenswrapper[5118]: 	for f in $(ls /tmp/serviceca); do
Jan 21 00:10:33 crc kubenswrapper[5118]: 	echo $f
Jan 21 00:10:33 crc kubenswrapper[5118]: 	ca_file_path="/tmp/serviceca/${f}"
Jan 21 00:10:33 crc kubenswrapper[5118]: 	f=$(echo $f | sed -r 's/(.*)\.\./\1:/')
Jan 21 00:10:33 crc kubenswrapper[5118]: 	reg_dir_path="/etc/docker/certs.d/${f}"
Jan 21 00:10:33 crc kubenswrapper[5118]: 	if [ -e "${reg_dir_path}" ]; then
Jan 21 00:10:33 crc kubenswrapper[5118]: 	cp -u $ca_file_path $reg_dir_path/ca.crt
Jan 21 00:10:33 crc kubenswrapper[5118]: 	else
Jan 21 00:10:33 crc kubenswrapper[5118]: 	mkdir $reg_dir_path
Jan 21 00:10:33 crc kubenswrapper[5118]: 	cp $ca_file_path $reg_dir_path/ca.crt
Jan 21 00:10:33 crc kubenswrapper[5118]: 	fi
Jan 21 00:10:33 crc kubenswrapper[5118]: 	done
Jan 21 00:10:33 crc kubenswrapper[5118]: 	for d in $(ls /etc/docker/certs.d); do
Jan 21 00:10:33 crc kubenswrapper[5118]: 	echo $d
Jan 21 00:10:33 crc kubenswrapper[5118]: 	dp=$(echo $d | sed -r 's/(.*):/\1\.\./')
Jan 21 00:10:33 crc kubenswrapper[5118]: 	reg_conf_path="/tmp/serviceca/${dp}"
Jan 21 00:10:33 crc kubenswrapper[5118]: 	if [ ! -e "${reg_conf_path}" ]; then
Jan 21 00:10:33 crc kubenswrapper[5118]: 	rm -rf /etc/docker/certs.d/$d
Jan 21 00:10:33 crc kubenswrapper[5118]: 	fi
Jan 21 00:10:33 crc kubenswrapper[5118]: 	done
Jan 21 00:10:33 crc kubenswrapper[5118]: 	sleep 60 & wait ${!}
Jan 21 00:10:33 crc kubenswrapper[5118]: 	done
Jan 21 00:10:33 crc kubenswrapper[5118]: 	],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5fh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-9sftt_openshift-image-registry(50c45c57-9291-48d3-8022-00a314541104): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 21 00:10:33 crc kubenswrapper[5118]: 	> logger="UnhandledError"
Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.879365 5118 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-9sftt" podUID="50c45c57-9291-48d3-8022-00a314541104" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.879442 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.879597 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8e2c6b8-bbac-4c8c-98aa-eed95855d358\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://9d9f0111a3537cc924a7e201bcd1e6a41bc82e79b86ec8f1d33560c518239fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b626744
7cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c43bace9e1ec4b78fc3886b886cfc9eb9505e5cd415b54a393092a5fb6bfede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\
"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://efa0534fa57e4334809de905bc9c6076a74ca99b2829d2716055befea0eb99ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://898860c9529a12085df4c5531acb1bd4f2bf2dc8acc40c795bef9e642ab80c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cde9ebfec14b67069eee7df51b0b8e257d4b7ccb5fc744f7cf08722b62167f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",
\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: W0121 00:10:33.883770 5118 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0541bb33_5d4a_4ef9_964c_884c727499f6.slice/crio-01a48b4f59bd74f65bd0cf238892d11c439cdcc8adbf67126d810bc22850aa23 WatchSource:0}: Error finding container 01a48b4f59bd74f65bd0cf238892d11c439cdcc8adbf67126d810bc22850aa23: Status 404 returned error can't find the container with id 01a48b4f59bd74f65bd0cf238892d11c439cdcc8adbf67126d810bc22850aa23 Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.886635 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmjvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[
]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-d4lsz_openshift-multus(0541bb33-5d4a-4ef9-964c-884c727499f6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.888307 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qcqwq" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.888635 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" podUID="0541bb33-5d4a-4ef9-964c-884c727499f6" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.889708 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-22r9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.893935 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 00:10:33 crc kubenswrapper[5118]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 21 00:10:33 crc kubenswrapper[5118]: set -euo pipefail Jan 21 00:10:33 crc kubenswrapper[5118]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 21 00:10:33 crc kubenswrapper[5118]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 21 00:10:33 crc kubenswrapper[5118]: # As the secret mount is optional we must wait for the files to be present. Jan 21 00:10:33 crc kubenswrapper[5118]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 21 00:10:33 crc kubenswrapper[5118]: TS=$(date +%s) Jan 21 00:10:33 crc kubenswrapper[5118]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 21 00:10:33 crc kubenswrapper[5118]: HAS_LOGGED_INFO=0 Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: log_missing_certs(){ Jan 21 00:10:33 crc kubenswrapper[5118]: CUR_TS=$(date +%s) Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 21 00:10:33 crc kubenswrapper[5118]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 21 00:10:33 crc kubenswrapper[5118]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. 
Jan 21 00:10:33 crc kubenswrapper[5118]: HAS_LOGGED_INFO=1 Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: } Jan 21 00:10:33 crc kubenswrapper[5118]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Jan 21 00:10:33 crc kubenswrapper[5118]: log_missing_certs Jan 21 00:10:33 crc kubenswrapper[5118]: sleep 5 Jan 21 00:10:33 crc kubenswrapper[5118]: done Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 21 00:10:33 crc kubenswrapper[5118]: exec /usr/bin/kube-rbac-proxy \ Jan 21 00:10:33 crc kubenswrapper[5118]: --logtostderr \ Jan 21 00:10:33 crc kubenswrapper[5118]: --secure-listen-address=:9108 \ Jan 21 00:10:33 crc kubenswrapper[5118]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 21 00:10:33 crc kubenswrapper[5118]: --upstream=http://127.0.0.1:29108/ \ Jan 21 00:10:33 crc kubenswrapper[5118]: --tls-private-key-file=${TLS_PK} \ Jan 21 00:10:33 crc kubenswrapper[5118]: --tls-cert-file=${TLS_CERT} Jan 21 00:10:33 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzdws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-kzdr6_openshift-ovn-kubernetes(ddc3c284-5d85-4e40-b285-f16062ad8d9c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:10:33 crc kubenswrapper[5118]: > logger="UnhandledError" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.896047 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 00:10:33 crc kubenswrapper[5118]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ -f "/env/_master" ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: set -o allexport Jan 21 00:10:33 crc kubenswrapper[5118]: source "/env/_master" Jan 21 00:10:33 crc kubenswrapper[5118]: set +o allexport Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: ovn_v4_join_subnet_opt= Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 21 
00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: ovn_v6_join_subnet_opt= Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: ovn_v4_transit_switch_subnet_opt= Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: ovn_v6_transit_switch_subnet_opt= Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: dns_name_resolver_enabled_flag= Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ "false" == "true" ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: # This is needed so that converting clusters from GA to TP Jan 21 00:10:33 crc kubenswrapper[5118]: # will rollout control plane pods as well Jan 21 00:10:33 crc kubenswrapper[5118]: network_segmentation_enabled_flag= Jan 21 00:10:33 crc kubenswrapper[5118]: multi_network_enabled_flag= Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ "true" == "true" ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: multi_network_enabled_flag="--enable-multi-network" Jan 21 00:10:33 crc kubenswrapper[5118]: fi 
Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ "true" == "true" ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ "true" != "true" ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: multi_network_enabled_flag="--enable-multi-network" Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: route_advertisements_enable_flag= Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ "false" == "true" ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: preconfigured_udn_addresses_enable_flag= Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ "false" == "true" ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: # Enable multi-network policy if configured (control-plane always full mode) Jan 21 00:10:33 crc kubenswrapper[5118]: multi_network_policy_enabled_flag= Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ "false" == "true" ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: # Enable admin network policy if configured (control-plane always full mode) Jan 21 00:10:33 crc kubenswrapper[5118]: admin_network_policy_enabled_flag= Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ "true" == "true" ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: 
admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: if [ "shared" == "shared" ]; then Jan 21 00:10:33 crc kubenswrapper[5118]: gateway_mode_flags="--gateway-mode shared" Jan 21 00:10:33 crc kubenswrapper[5118]: elif [ "shared" == "local" ]; then Jan 21 00:10:33 crc kubenswrapper[5118]: gateway_mode_flags="--gateway-mode local" Jan 21 00:10:33 crc kubenswrapper[5118]: else Jan 21 00:10:33 crc kubenswrapper[5118]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Jan 21 00:10:33 crc kubenswrapper[5118]: exit 1 Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 21 00:10:33 crc kubenswrapper[5118]: exec /usr/bin/ovnkube \ Jan 21 00:10:33 crc kubenswrapper[5118]: --enable-interconnect \ Jan 21 00:10:33 crc kubenswrapper[5118]: --init-cluster-manager "${K8S_NODE}" \ Jan 21 00:10:33 crc kubenswrapper[5118]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 21 00:10:33 crc kubenswrapper[5118]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 21 00:10:33 crc kubenswrapper[5118]: --metrics-bind-address "127.0.0.1:29108" \ Jan 21 00:10:33 crc kubenswrapper[5118]: --metrics-enable-pprof \ Jan 21 00:10:33 crc kubenswrapper[5118]: --metrics-enable-config-duration \ Jan 21 00:10:33 crc kubenswrapper[5118]: ${ovn_v4_join_subnet_opt} \ Jan 21 00:10:33 crc kubenswrapper[5118]: ${ovn_v6_join_subnet_opt} \ Jan 21 00:10:33 crc kubenswrapper[5118]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 21 00:10:33 crc kubenswrapper[5118]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 21 00:10:33 crc kubenswrapper[5118]: ${dns_name_resolver_enabled_flag} \ Jan 21 00:10:33 crc kubenswrapper[5118]: ${persistent_ips_enabled_flag} \ Jan 21 00:10:33 crc 
kubenswrapper[5118]: ${multi_network_enabled_flag} \ Jan 21 00:10:33 crc kubenswrapper[5118]: ${network_segmentation_enabled_flag} \ Jan 21 00:10:33 crc kubenswrapper[5118]: ${gateway_mode_flags} \ Jan 21 00:10:33 crc kubenswrapper[5118]: ${route_advertisements_enable_flag} \ Jan 21 00:10:33 crc kubenswrapper[5118]: ${preconfigured_udn_addresses_enable_flag} \ Jan 21 00:10:33 crc kubenswrapper[5118]: --enable-egress-ip=true \ Jan 21 00:10:33 crc kubenswrapper[5118]: --enable-egress-firewall=true \ Jan 21 00:10:33 crc kubenswrapper[5118]: --enable-egress-qos=true \ Jan 21 00:10:33 crc kubenswrapper[5118]: --enable-egress-service=true \ Jan 21 00:10:33 crc kubenswrapper[5118]: --enable-multicast \ Jan 21 00:10:33 crc kubenswrapper[5118]: --enable-multi-external-gateway=true \ Jan 21 00:10:33 crc kubenswrapper[5118]: ${multi_network_policy_enabled_flag} \ Jan 21 00:10:33 crc kubenswrapper[5118]: ${admin_network_policy_enabled_flag} Jan 21 00:10:33 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzdws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-kzdr6_openshift-ovn-kubernetes(ddc3c284-5d85-4e40-b285-f16062ad8d9c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:10:33 crc kubenswrapper[5118]: > logger="UnhandledError" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.897841 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" podUID="ddc3c284-5d85-4e40-b285-f16062ad8d9c" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.898286 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9hvtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21105fbf-0225-4ba6-ba90-17808d5250c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9hvtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:33 crc kubenswrapper[5118]: W0121 00:10:33.898693 5118 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c0390f5_26b4_4299_958c_acac058be619.slice/crio-8dd2fbf760fad537990e407cbf51622178c978130ffcd13eab8a59b6701b54a2 WatchSource:0}: Error finding container 8dd2fbf760fad537990e407cbf51622178c978130ffcd13eab8a59b6701b54a2: Status 404 returned error can't find the container with id 8dd2fbf760fad537990e407cbf51622178c978130ffcd13eab8a59b6701b54a2 Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.900678 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 00:10:33 crc kubenswrapper[5118]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 21 00:10:33 crc kubenswrapper[5118]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 21 00:10:33 crc kubenswrapper[5118]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5t5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-qcqwq_openshift-multus(7c0390f5-26b4-4299-958c-acac058be619): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:10:33 crc kubenswrapper[5118]: > logger="UnhandledError" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.902074 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-qcqwq" podUID="7c0390f5-26b4-4299-958c-acac058be619" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.905836 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-znhzw" Jan 21 00:10:33 crc kubenswrapper[5118]: W0121 00:10:33.914843 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacee46d0_3d60_4d08_abbd_b3df00872f90.slice/crio-658c39f3febb8b0c05495d3296679de80589411623b0a05e71bcd62745618e9a WatchSource:0}: Error finding container 658c39f3febb8b0c05495d3296679de80589411623b0a05e71bcd62745618e9a: Status 404 returned error can't find the container with id 658c39f3febb8b0c05495d3296679de80589411623b0a05e71bcd62745618e9a Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.916877 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 00:10:33 crc kubenswrapper[5118]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 21 00:10:33 crc kubenswrapper[5118]: set -uo pipefail Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 21 00:10:33 crc kubenswrapper[5118]: HOSTS_FILE="/etc/hosts" Jan 21 00:10:33 crc kubenswrapper[5118]: TEMP_FILE="/tmp/hosts.tmp" Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: # Make a temporary file with the old hosts file's attributes. Jan 21 00:10:33 crc kubenswrapper[5118]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 21 00:10:33 crc kubenswrapper[5118]: echo "Failed to preserve hosts file. Exiting." 
Jan 21 00:10:33 crc kubenswrapper[5118]: exit 1 Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: while true; do Jan 21 00:10:33 crc kubenswrapper[5118]: declare -A svc_ips Jan 21 00:10:33 crc kubenswrapper[5118]: for svc in "${services[@]}"; do Jan 21 00:10:33 crc kubenswrapper[5118]: # Fetch service IP from cluster dns if present. We make several tries Jan 21 00:10:33 crc kubenswrapper[5118]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 21 00:10:33 crc kubenswrapper[5118]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 21 00:10:33 crc kubenswrapper[5118]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 21 00:10:33 crc kubenswrapper[5118]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 21 00:10:33 crc kubenswrapper[5118]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 21 00:10:33 crc kubenswrapper[5118]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 21 00:10:33 crc kubenswrapper[5118]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 21 00:10:33 crc kubenswrapper[5118]: for i in ${!cmds[*]} Jan 21 00:10:33 crc kubenswrapper[5118]: do Jan 21 00:10:33 crc kubenswrapper[5118]: ips=($(eval "${cmds[i]}")) Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: svc_ips["${svc}"]="${ips[@]}" Jan 21 00:10:33 crc kubenswrapper[5118]: break Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: done Jan 21 00:10:33 crc kubenswrapper[5118]: done Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: # Update /etc/hosts only if we get valid service IPs Jan 21 00:10:33 crc kubenswrapper[5118]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 21 00:10:33 crc kubenswrapper[5118]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 21 00:10:33 crc kubenswrapper[5118]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 21 00:10:33 crc kubenswrapper[5118]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 21 00:10:33 crc kubenswrapper[5118]: sleep 60 & wait Jan 21 00:10:33 crc kubenswrapper[5118]: continue Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: # Append resolver entries for services Jan 21 00:10:33 crc kubenswrapper[5118]: rc=0 Jan 21 00:10:33 crc kubenswrapper[5118]: for svc in "${!svc_ips[@]}"; do Jan 21 00:10:33 crc kubenswrapper[5118]: for ip in ${svc_ips[${svc}]}; do Jan 21 00:10:33 crc kubenswrapper[5118]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Jan 21 00:10:33 crc kubenswrapper[5118]: done Jan 21 00:10:33 crc kubenswrapper[5118]: done Jan 21 00:10:33 crc kubenswrapper[5118]: if [[ $rc -ne 0 ]]; then Jan 21 00:10:33 crc kubenswrapper[5118]: sleep 60 & wait Jan 21 00:10:33 crc kubenswrapper[5118]: continue Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: Jan 21 00:10:33 crc kubenswrapper[5118]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 21 00:10:33 crc kubenswrapper[5118]: # Replace /etc/hosts with our modified version if needed Jan 21 00:10:33 crc kubenswrapper[5118]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 21 00:10:33 crc kubenswrapper[5118]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 21 00:10:33 crc kubenswrapper[5118]: fi Jan 21 00:10:33 crc kubenswrapper[5118]: sleep 60 & wait Jan 21 00:10:33 crc kubenswrapper[5118]: unset svc_ips Jan 21 00:10:33 crc kubenswrapper[5118]: done Jan 21 00:10:33 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-962nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-znhzw_openshift-dns(acee46d0-3d60-4d08-abbd-b3df00872f90): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:10:33 crc kubenswrapper[5118]: > logger="UnhandledError" Jan 21 00:10:33 crc kubenswrapper[5118]: E0121 00:10:33.918111 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-znhzw" podUID="acee46d0-3d60-4d08-abbd-b3df00872f90" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.946635 5118 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" 
DevicePath \"\"" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.948539 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.948594 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.948603 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.948616 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:33 crc kubenswrapper[5118]: I0121 00:10:33.948625 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:33Z","lastTransitionTime":"2026-01-21T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.050695 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.050778 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.050803 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.050834 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.050858 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:34Z","lastTransitionTime":"2026-01-21T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.153125 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.153187 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.153198 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.153213 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.153223 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:34Z","lastTransitionTime":"2026-01-21T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.249586 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.249751 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-21 00:10:35.249720312 +0000 UTC m=+90.573967330 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.249851 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs\") pod \"network-metrics-daemon-9hvtf\" (UID: \"21105fbf-0225-4ba6-ba90-17808d5250c6\") " pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.250008 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.250089 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs podName:21105fbf-0225-4ba6-ba90-17808d5250c6 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:35.250067801 +0000 UTC m=+90.574314819 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs") pod "network-metrics-daemon-9hvtf" (UID: "21105fbf-0225-4ba6-ba90-17808d5250c6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.255780 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.255841 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.255854 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.255872 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.255885 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:34Z","lastTransitionTime":"2026-01-21T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.300473 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerStarted","Data":"c0ec43a4f1b8caf57b219eb8283d87eadc74827c740b8e6a175c044f08150495"} Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.301475 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"644879c5a12ee5e6ee3edf2bd722a9b6a249bf9d2e4b58c6a66d37c62ffe51e0"} Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.302355 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 00:10:34 crc kubenswrapper[5118]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 21 00:10:34 crc kubenswrapper[5118]: apiVersion: v1 Jan 21 00:10:34 crc kubenswrapper[5118]: clusters: Jan 21 00:10:34 crc kubenswrapper[5118]: - cluster: Jan 21 00:10:34 crc kubenswrapper[5118]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 21 00:10:34 crc kubenswrapper[5118]: server: https://api-int.crc.testing:6443 Jan 21 00:10:34 crc kubenswrapper[5118]: name: default-cluster Jan 21 00:10:34 crc kubenswrapper[5118]: contexts: Jan 21 00:10:34 crc kubenswrapper[5118]: - context: Jan 21 00:10:34 crc kubenswrapper[5118]: cluster: default-cluster Jan 21 00:10:34 crc kubenswrapper[5118]: namespace: default Jan 21 00:10:34 crc kubenswrapper[5118]: user: default-auth Jan 21 00:10:34 crc kubenswrapper[5118]: name: default-context Jan 21 00:10:34 crc kubenswrapper[5118]: current-context: default-context Jan 21 00:10:34 crc kubenswrapper[5118]: kind: Config Jan 21 00:10:34 crc 
kubenswrapper[5118]: preferences: {} Jan 21 00:10:34 crc kubenswrapper[5118]: users: Jan 21 00:10:34 crc kubenswrapper[5118]: - name: default-auth Jan 21 00:10:34 crc kubenswrapper[5118]: user: Jan 21 00:10:34 crc kubenswrapper[5118]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 21 00:10:34 crc kubenswrapper[5118]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 21 00:10:34 crc kubenswrapper[5118]: EOF Jan 21 00:10:34 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfh6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-h8fs2_openshift-ovn-kubernetes(91e46657-55ca-43e7-9a43-6bb875c7debf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:10:34 crc kubenswrapper[5118]: > logger="UnhandledError" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.302878 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-znhzw" event={"ID":"acee46d0-3d60-4d08-abbd-b3df00872f90","Type":"ContainerStarted","Data":"658c39f3febb8b0c05495d3296679de80589411623b0a05e71bcd62745618e9a"} Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 
00:10:34.303022 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.303420 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.303789 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qcqwq" event={"ID":"7c0390f5-26b4-4299-958c-acac058be619","Type":"ContainerStarted","Data":"8dd2fbf760fad537990e407cbf51622178c978130ffcd13eab8a59b6701b54a2"} Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.304102 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.304126 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 00:10:34 crc kubenswrapper[5118]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 21 00:10:34 crc kubenswrapper[5118]: set -uo pipefail Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 21 
00:10:34 crc kubenswrapper[5118]: HOSTS_FILE="/etc/hosts" Jan 21 00:10:34 crc kubenswrapper[5118]: TEMP_FILE="/tmp/hosts.tmp" Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: # Make a temporary file with the old hosts file's attributes. Jan 21 00:10:34 crc kubenswrapper[5118]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 21 00:10:34 crc kubenswrapper[5118]: echo "Failed to preserve hosts file. Exiting." Jan 21 00:10:34 crc kubenswrapper[5118]: exit 1 Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: while true; do Jan 21 00:10:34 crc kubenswrapper[5118]: declare -A svc_ips Jan 21 00:10:34 crc kubenswrapper[5118]: for svc in "${services[@]}"; do Jan 21 00:10:34 crc kubenswrapper[5118]: # Fetch service IP from cluster dns if present. We make several tries Jan 21 00:10:34 crc kubenswrapper[5118]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 21 00:10:34 crc kubenswrapper[5118]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 21 00:10:34 crc kubenswrapper[5118]: # support UDP loadbalancers and require reaching DNS through TCP. 
Jan 21 00:10:34 crc kubenswrapper[5118]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 21 00:10:34 crc kubenswrapper[5118]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 21 00:10:34 crc kubenswrapper[5118]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 21 00:10:34 crc kubenswrapper[5118]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 21 00:10:34 crc kubenswrapper[5118]: for i in ${!cmds[*]} Jan 21 00:10:34 crc kubenswrapper[5118]: do Jan 21 00:10:34 crc kubenswrapper[5118]: ips=($(eval "${cmds[i]}")) Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: svc_ips["${svc}"]="${ips[@]}" Jan 21 00:10:34 crc kubenswrapper[5118]: break Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: done Jan 21 00:10:34 crc kubenswrapper[5118]: done Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: # Update /etc/hosts only if we get valid service IPs Jan 21 00:10:34 crc kubenswrapper[5118]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 21 00:10:34 crc kubenswrapper[5118]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 21 00:10:34 crc kubenswrapper[5118]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 21 00:10:34 crc kubenswrapper[5118]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 21 00:10:34 crc kubenswrapper[5118]: sleep 60 & wait Jan 21 00:10:34 crc kubenswrapper[5118]: continue Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: # Append resolver entries for services Jan 21 00:10:34 crc kubenswrapper[5118]: rc=0 Jan 21 00:10:34 crc kubenswrapper[5118]: for svc in "${!svc_ips[@]}"; do Jan 21 00:10:34 crc kubenswrapper[5118]: for ip in ${svc_ips[${svc}]}; do Jan 21 00:10:34 crc kubenswrapper[5118]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Jan 21 00:10:34 crc kubenswrapper[5118]: done Jan 21 00:10:34 crc kubenswrapper[5118]: done Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ $rc -ne 0 ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: sleep 60 & wait Jan 21 00:10:34 crc kubenswrapper[5118]: continue Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 21 00:10:34 crc kubenswrapper[5118]: # Replace /etc/hosts with our modified version if needed Jan 21 00:10:34 crc kubenswrapper[5118]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 21 00:10:34 crc kubenswrapper[5118]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: sleep 60 & wait Jan 21 00:10:34 crc kubenswrapper[5118]: unset svc_ips Jan 21 00:10:34 crc kubenswrapper[5118]: done Jan 21 00:10:34 crc kubenswrapper[5118]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-962nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-znhzw_openshift-dns(acee46d0-3d60-4d08-abbd-b3df00872f90): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:10:34 crc kubenswrapper[5118]: > logger="UnhandledError" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.305431 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 00:10:34 crc kubenswrapper[5118]: container 
&Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 21 00:10:34 crc kubenswrapper[5118]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 21 00:10:34 crc kubenswrapper[5118]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5t5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-qcqwq_openshift-multus(7c0390f5-26b4-4299-958c-acac058be619): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:10:34 crc kubenswrapper[5118]: > logger="UnhandledError" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.306362 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" event={"ID":"0541bb33-5d4a-4ef9-964c-884c727499f6","Type":"ContainerStarted","Data":"01a48b4f59bd74f65bd0cf238892d11c439cdcc8adbf67126d810bc22850aa23"} Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.306619 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with 
CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-znhzw" podUID="acee46d0-3d60-4d08-abbd-b3df00872f90" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.306620 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-qcqwq" podUID="7c0390f5-26b4-4299-958c-acac058be619" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.307513 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"b7d5eff41331448c6aab51da094d45af4b203db2736cf43fde65fd897f5f7670"} Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.307514 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmjvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-d4lsz_openshift-multus(0541bb33-5d4a-4ef9-964c-884c727499f6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.308464 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" event={"ID":"ddc3c284-5d85-4e40-b285-f16062ad8d9c","Type":"ContainerStarted","Data":"571babcd7c15278c84d993cda54ba05119616b4820b1104f1e0cd4bc0b5e5b9d"} Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.308598 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" podUID="0541bb33-5d4a-4ef9-964c-884c727499f6" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.309420 5118 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxsjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.309482 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9sftt" event={"ID":"50c45c57-9291-48d3-8022-00a314541104","Type":"ContainerStarted","Data":"50c376fd21b4867e278b2c0525785667d3c57803336d707cd2343ab8165ef725"} Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.309972 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 00:10:34 crc kubenswrapper[5118]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 21 00:10:34 crc kubenswrapper[5118]: set -euo pipefail Jan 21 00:10:34 crc kubenswrapper[5118]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 21 00:10:34 crc kubenswrapper[5118]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 21 00:10:34 crc kubenswrapper[5118]: # As 
the secret mount is optional we must wait for the files to be present. Jan 21 00:10:34 crc kubenswrapper[5118]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 21 00:10:34 crc kubenswrapper[5118]: TS=$(date +%s) Jan 21 00:10:34 crc kubenswrapper[5118]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 21 00:10:34 crc kubenswrapper[5118]: HAS_LOGGED_INFO=0 Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: log_missing_certs(){ Jan 21 00:10:34 crc kubenswrapper[5118]: CUR_TS=$(date +%s) Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 21 00:10:34 crc kubenswrapper[5118]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 21 00:10:34 crc kubenswrapper[5118]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 21 00:10:34 crc kubenswrapper[5118]: HAS_LOGGED_INFO=1 Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: } Jan 21 00:10:34 crc kubenswrapper[5118]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Jan 21 00:10:34 crc kubenswrapper[5118]: log_missing_certs Jan 21 00:10:34 crc kubenswrapper[5118]: sleep 5 Jan 21 00:10:34 crc kubenswrapper[5118]: done Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 21 00:10:34 crc kubenswrapper[5118]: exec /usr/bin/kube-rbac-proxy \ Jan 21 00:10:34 crc kubenswrapper[5118]: --logtostderr \ Jan 21 00:10:34 crc kubenswrapper[5118]: --secure-listen-address=:9108 \ Jan 21 00:10:34 crc kubenswrapper[5118]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 21 00:10:34 crc kubenswrapper[5118]: --upstream=http://127.0.0.1:29108/ \ Jan 21 00:10:34 crc kubenswrapper[5118]: --tls-private-key-file=${TLS_PK} \ Jan 21 00:10:34 crc kubenswrapper[5118]: --tls-cert-file=${TLS_CERT} Jan 21 00:10:34 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzdws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-kzdr6_openshift-ovn-kubernetes(ddc3c284-5d85-4e40-b285-f16062ad8d9c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:10:34 crc kubenswrapper[5118]: > logger="UnhandledError" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.310550 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 00:10:34 crc kubenswrapper[5118]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ -f "/env/_master" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: set -o allexport Jan 21 00:10:34 crc kubenswrapper[5118]: source "/env/_master" Jan 21 00:10:34 crc kubenswrapper[5118]: set +o allexport Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 21 00:10:34 crc kubenswrapper[5118]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 21 00:10:34 crc kubenswrapper[5118]: ho_enable="--enable-hybrid-overlay" Jan 21 00:10:34 crc kubenswrapper[5118]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 21 00:10:34 crc kubenswrapper[5118]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 21 00:10:34 crc kubenswrapper[5118]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 21 00:10:34 crc kubenswrapper[5118]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 21 00:10:34 crc kubenswrapper[5118]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 21 00:10:34 crc kubenswrapper[5118]: --webhook-host=127.0.0.1 \ Jan 21 00:10:34 crc kubenswrapper[5118]: --webhook-port=9743 \ Jan 21 00:10:34 crc kubenswrapper[5118]: ${ho_enable} \ Jan 21 00:10:34 crc kubenswrapper[5118]: --enable-interconnect \ Jan 21 00:10:34 crc kubenswrapper[5118]: --disable-approver \ Jan 21 00:10:34 crc kubenswrapper[5118]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 21 00:10:34 crc kubenswrapper[5118]: --wait-for-kubernetes-api=200s \ Jan 21 00:10:34 crc kubenswrapper[5118]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 21 00:10:34 crc kubenswrapper[5118]: --loglevel="${LOGLEVEL}" Jan 21 00:10:34 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:10:34 crc kubenswrapper[5118]: > logger="UnhandledError" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.310970 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 00:10:34 crc kubenswrapper[5118]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; 
echo shutting down node-ca; exit 0' TERM Jan 21 00:10:34 crc kubenswrapper[5118]: while [ true ]; Jan 21 00:10:34 crc kubenswrapper[5118]: do Jan 21 00:10:34 crc kubenswrapper[5118]: for f in $(ls /tmp/serviceca); do Jan 21 00:10:34 crc kubenswrapper[5118]: echo $f Jan 21 00:10:34 crc kubenswrapper[5118]: ca_file_path="/tmp/serviceca/${f}" Jan 21 00:10:34 crc kubenswrapper[5118]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 21 00:10:34 crc kubenswrapper[5118]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 21 00:10:34 crc kubenswrapper[5118]: if [ -e "${reg_dir_path}" ]; then Jan 21 00:10:34 crc kubenswrapper[5118]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 21 00:10:34 crc kubenswrapper[5118]: else Jan 21 00:10:34 crc kubenswrapper[5118]: mkdir $reg_dir_path Jan 21 00:10:34 crc kubenswrapper[5118]: cp $ca_file_path $reg_dir_path/ca.crt Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: done Jan 21 00:10:34 crc kubenswrapper[5118]: for d in $(ls /etc/docker/certs.d); do Jan 21 00:10:34 crc kubenswrapper[5118]: echo $d Jan 21 00:10:34 crc kubenswrapper[5118]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 21 00:10:34 crc kubenswrapper[5118]: reg_conf_path="/tmp/serviceca/${dp}" Jan 21 00:10:34 crc kubenswrapper[5118]: if [ ! 
-e "${reg_conf_path}" ]; then Jan 21 00:10:34 crc kubenswrapper[5118]: rm -rf /etc/docker/certs.d/$d Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: done Jan 21 00:10:34 crc kubenswrapper[5118]: sleep 60 & wait ${!} Jan 21 00:10:34 crc kubenswrapper[5118]: done Jan 21 00:10:34 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5fh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-9sftt_openshift-image-registry(50c45c57-9291-48d3-8022-00a314541104): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:10:34 crc kubenswrapper[5118]: > logger="UnhandledError" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.311514 5118 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxsjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.312009 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea90c3b6-90f2-4468-8987-cbc4691535cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://01a6d01cbabb92bffcca05eb808b4bd0bee991f66f129422707d982e4e3d320f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://37fdbdbec8b545e1b3921af5413cad07f8ffa20745589533bc0fffa6ec9a42fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f3214d25bbdd49a8a29ce6f30a600024d862102e53bee5c64ac3f0880d97481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b344beee6c6fb8f37064a0308\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b344beee6c6fb8f37064a0308\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.312209 5118 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-9sftt" podUID="50c45c57-9291-48d3-8022-00a314541104" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.312624 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.312771 5118 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 00:10:34 crc kubenswrapper[5118]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ -f "/env/_master" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: set -o allexport Jan 21 00:10:34 crc kubenswrapper[5118]: source "/env/_master" Jan 21 00:10:34 crc kubenswrapper[5118]: set +o allexport Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 21 00:10:34 crc kubenswrapper[5118]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 21 00:10:34 crc kubenswrapper[5118]: --disable-webhook \ Jan 21 00:10:34 crc kubenswrapper[5118]: 
--csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 21 00:10:34 crc kubenswrapper[5118]: --loglevel="${LOGLEVEL}" Jan 21 00:10:34 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:10:34 crc kubenswrapper[5118]: > logger="UnhandledError" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.313080 5118 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 21 00:10:34 crc kubenswrapper[5118]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ -f "/env/_master" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: set -o allexport Jan 21 00:10:34 crc kubenswrapper[5118]: source "/env/_master" Jan 21 00:10:34 crc kubenswrapper[5118]: set +o allexport Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: ovn_v4_join_subnet_opt= Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: ovn_v6_join_subnet_opt= Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: ovn_v4_transit_switch_subnet_opt= Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: ovn_v6_transit_switch_subnet_opt= Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ "" != "" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: dns_name_resolver_enabled_flag= Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ "false" == "true" 
]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: # This is needed so that converting clusters from GA to TP Jan 21 00:10:34 crc kubenswrapper[5118]: # will rollout control plane pods as well Jan 21 00:10:34 crc kubenswrapper[5118]: network_segmentation_enabled_flag= Jan 21 00:10:34 crc kubenswrapper[5118]: multi_network_enabled_flag= Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ "true" == "true" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: multi_network_enabled_flag="--enable-multi-network" Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ "true" == "true" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ "true" != "true" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: multi_network_enabled_flag="--enable-multi-network" Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: route_advertisements_enable_flag= Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ "false" == "true" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: preconfigured_udn_addresses_enable_flag= Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ "false" == "true" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 21 00:10:34 crc 
kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: # Enable multi-network policy if configured (control-plane always full mode) Jan 21 00:10:34 crc kubenswrapper[5118]: multi_network_policy_enabled_flag= Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ "false" == "true" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: # Enable admin network policy if configured (control-plane always full mode) Jan 21 00:10:34 crc kubenswrapper[5118]: admin_network_policy_enabled_flag= Jan 21 00:10:34 crc kubenswrapper[5118]: if [[ "true" == "true" ]]; then Jan 21 00:10:34 crc kubenswrapper[5118]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: if [ "shared" == "shared" ]; then Jan 21 00:10:34 crc kubenswrapper[5118]: gateway_mode_flags="--gateway-mode shared" Jan 21 00:10:34 crc kubenswrapper[5118]: elif [ "shared" == "local" ]; then Jan 21 00:10:34 crc kubenswrapper[5118]: gateway_mode_flags="--gateway-mode local" Jan 21 00:10:34 crc kubenswrapper[5118]: else Jan 21 00:10:34 crc kubenswrapper[5118]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Jan 21 00:10:34 crc kubenswrapper[5118]: exit 1 Jan 21 00:10:34 crc kubenswrapper[5118]: fi Jan 21 00:10:34 crc kubenswrapper[5118]: Jan 21 00:10:34 crc kubenswrapper[5118]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 21 00:10:34 crc kubenswrapper[5118]: exec /usr/bin/ovnkube \ Jan 21 00:10:34 crc kubenswrapper[5118]: --enable-interconnect \ Jan 21 00:10:34 crc kubenswrapper[5118]: --init-cluster-manager "${K8S_NODE}" \ Jan 21 00:10:34 crc kubenswrapper[5118]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 21 00:10:34 crc kubenswrapper[5118]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 21 00:10:34 crc kubenswrapper[5118]: --metrics-bind-address "127.0.0.1:29108" \ Jan 21 00:10:34 crc kubenswrapper[5118]: --metrics-enable-pprof \ Jan 21 00:10:34 crc kubenswrapper[5118]: --metrics-enable-config-duration \ Jan 21 00:10:34 crc kubenswrapper[5118]: ${ovn_v4_join_subnet_opt} \ Jan 21 00:10:34 crc kubenswrapper[5118]: ${ovn_v6_join_subnet_opt} \ Jan 21 00:10:34 crc kubenswrapper[5118]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 21 00:10:34 crc kubenswrapper[5118]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 21 00:10:34 crc kubenswrapper[5118]: ${dns_name_resolver_enabled_flag} \ Jan 21 00:10:34 crc kubenswrapper[5118]: ${persistent_ips_enabled_flag} \ Jan 21 00:10:34 crc kubenswrapper[5118]: ${multi_network_enabled_flag} \ Jan 21 00:10:34 crc kubenswrapper[5118]: ${network_segmentation_enabled_flag} \ Jan 21 00:10:34 crc kubenswrapper[5118]: ${gateway_mode_flags} \ Jan 21 00:10:34 crc kubenswrapper[5118]: ${route_advertisements_enable_flag} \ Jan 21 00:10:34 crc kubenswrapper[5118]: ${preconfigured_udn_addresses_enable_flag} \ Jan 21 00:10:34 crc kubenswrapper[5118]: --enable-egress-ip=true \ Jan 21 00:10:34 crc kubenswrapper[5118]: --enable-egress-firewall=true \ Jan 21 00:10:34 crc kubenswrapper[5118]: --enable-egress-qos=true \ Jan 21 00:10:34 crc kubenswrapper[5118]: --enable-egress-service=true \ 
Jan 21 00:10:34 crc kubenswrapper[5118]: --enable-multicast \ Jan 21 00:10:34 crc kubenswrapper[5118]: --enable-multi-external-gateway=true \ Jan 21 00:10:34 crc kubenswrapper[5118]: ${multi_network_policy_enabled_flag} \ Jan 21 00:10:34 crc kubenswrapper[5118]: ${admin_network_policy_enabled_flag} Jan 21 00:10:34 crc kubenswrapper[5118]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzdws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-control-plane-57b78d8988-kzdr6_openshift-ovn-kubernetes(ddc3c284-5d85-4e40-b285-f16062ad8d9c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 00:10:34 crc kubenswrapper[5118]: > logger="UnhandledError" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.314292 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" podUID="ddc3c284-5d85-4e40-b285-f16062ad8d9c" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.315211 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.323563 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.337007 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.347008 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qcqwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c0390f5-26b4-4299-958c-acac058be619\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5t5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qcqwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.359141 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.359199 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.359208 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.359223 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.359233 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:34Z","lastTransitionTime":"2026-01-21T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.364866 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8e2c6b8-bbac-4c8c-98aa-eed95855d358\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://9d9f0111a3537cc924a7e201bcd1e6a41bc82e79b86ec8f1d33560c518239fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c43bace9e1ec4b78fc3886b886cfc9eb9505e5cd415b54a393092a5fb6bfede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://efa0534fa57e4334809de905bc9c6076a74ca99b2829d2716055befea0eb99ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://898860c9529a12085df4c5531acb1bd4f2bf2dc8acc40c795bef9e642ab80c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cde9ebfec14b67069eee7df51b0b8e257d4b7ccb5fc744f7cf08722b62167f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.373750 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-22r9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.380808 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9hvtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21105fbf-0225-4ba6-ba90-17808d5250c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9hvtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.389004 5118 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc3c284-5d85-4e40-b285-f16062ad8d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kzdr6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.396128 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-znhzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acee46d0-3d60-4d08-abbd-b3df00872f90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-962nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-znhzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.403201 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4c6b76-3326-4edc-a392-9edcaf197d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69e28ae0052129054be6c0419161beea094bafc8c1cbcdcf5bf3436e7877d421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.420937 5118 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.430001 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.443282 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0541bb33-5d4a-4ef9-964c-884c727499f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d4lsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.487578 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b75e4d-eb03-4a0f-b349-9596c36b1f7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4192feefd5
cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:10:07Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0121 00:10:06.988883 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 00:10:06.989033 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 00:10:06.989980 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1710353325/tls.crt::/tmp/serving-cert-1710353325/tls.key\\\\\\\"\\\\nI0121 00:10:07.300917 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:10:07.302784 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:10:07.302805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:10:07.302832 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:10:07.302838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:10:07.306381 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:10:07.306408 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306414 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306419 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:10:07.306422 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:10:07.306426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:10:07.306429 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:10:07.306560 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:10:07.307535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:10:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.489794 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.489840 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.489861 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.489879 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.489890 5118 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:34Z","lastTransitionTime":"2026-01-21T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.502675 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.513470 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.526182 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e46657-55ca-43e7-9a43-6bb875c7debf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h8fs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.533141 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sftt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c45c57-9291-48d3-8022-00a314541104\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5fh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sftt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.541154 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ad38d8-0631-494b-8a0c-73936655173c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7da13cdac196a74d6f3d3fe06fd8b8f1b93152d831e98ee1b66f4bd30f77756b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71ef5ad6b3cecbe91cf0fa1e4e8759ddda878222a1c71e9801313336e424458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://531e890ac624829dfeab5674374a20bf8f80e96fe3ad6baff6532501d078f297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93e48b61d0a2e616f65259ffbca42d9d000600a9f57c456e9fafc249cbbfa187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.553550 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.553617 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.553741 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.553772 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:10:34 crc 
kubenswrapper[5118]: E0121 00:10:34.553805 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:36.553791865 +0000 UTC m=+91.878038883 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.553864 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:36.553842196 +0000 UTC m=+91.878089274 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.557305 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8e2c6b8-bbac-4c8c-98aa-eed95855d358\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://9d9f0111a3537cc924a7e201bcd1e6a41bc82e79b86ec8f1d33560c518239fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":
\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c43bace9e1ec4b78fc3886b886cfc9eb9505e5cd415b54a393092a5fb6bfede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://efa0534fa57e433480
9de905bc9c6076a74ca99b2829d2716055befea0eb99ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://898860c9529a12085df4c5531acb1bd4f2bf2dc8acc40c795bef9e642ab80c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"moun
tPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cde9ebfec14b67069eee7df51b0b8e257d4b7ccb5fc744f7cf08722b62167f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"res
ources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.566176 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-22r9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.573220 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9hvtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21105fbf-0225-4ba6-ba90-17808d5250c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9hvtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.582805 5118 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc3c284-5d85-4e40-b285-f16062ad8d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kzdr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.591227 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 
00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.591286 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.591302 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.591319 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.591332 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:34Z","lastTransitionTime":"2026-01-21T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.591250 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-znhzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acee46d0-3d60-4d08-abbd-b3df00872f90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-962nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-znhzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.599441 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4c6b76-3326-4edc-a392-9edcaf197d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69e28ae0052129054be6c0419161beea094bafc8c1cbcdcf5bf3436e7877d421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.615606 5118 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.654945 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.655287 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.655186 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.655642 5118 
projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.655743 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.655481 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.655926 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.655939 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.655905 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:36.655880794 +0000 UTC m=+91.980127832 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.656315 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:36.656291605 +0000 UTC m=+91.980538653 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.659799 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.693727 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:34 crc 
kubenswrapper[5118]: I0121 00:10:34.693778 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.693791 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.693808 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.693819 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:34Z","lastTransitionTime":"2026-01-21T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.700199 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0541bb33-5d4a-4ef9-964c-884c727499f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d4lsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.738662 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b75e4d-eb03-4a0f-b349-9596c36b1f7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:10:07Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0121 00:10:06.988883 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 00:10:06.989033 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 00:10:06.989980 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1710353325/tls.crt::/tmp/serving-cert-1710353325/tls.key\\\\\\\"\\\\nI0121 00:10:07.300917 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:10:07.302784 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:10:07.302805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:10:07.302832 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:10:07.302838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:10:07.306381 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:10:07.306408 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306414 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306419 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:10:07.306422 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:10:07.306426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:10:07.306429 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:10:07.306560 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:10:07.307535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:10:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.780348 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.796125 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.796192 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.796205 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.796219 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.796228 5118 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:34Z","lastTransitionTime":"2026-01-21T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.817092 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.861883 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e46657-55ca-43e7-9a43-6bb875c7debf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h8fs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.898406 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sftt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c45c57-9291-48d3-8022-00a314541104\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5fh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sftt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.899505 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.899539 5118 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.899549 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.899564 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.899574 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:34Z","lastTransitionTime":"2026-01-21T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.938673 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ad38d8-0631-494b-8a0c-73936655173c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7da13cdac196a74d6f3d3fe06fd8b8f1b93152d831e98ee1b66f4bd30f77756b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71ef5ad6b3cecbe91cf0fa1e4e8759ddda878222a1c71e9801313336e424458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://531e890ac624829dfeab5674374a20bf8f80e96fe3ad6baff6532501d078f297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93e48b61d0a2e616f65259ffbca42d9d000600a9f57c456e9fafc249cbbfa187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.974959 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.974972 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.975116 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.975130 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.975125 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.975255 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.975318 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 00:10:34 crc kubenswrapper[5118]: E0121 00:10:34.975363 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.977286 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea90c3b6-90f2-4468-8987-cbc4691535cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://01a6d01cbabb92bffcca05eb808b4bd0bee991f66f129422707d982e4e3d320f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linu
x\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://37fdbdbec8b545e1b3921af5413cad07f8ffa20745589533bc0fffa6ec9a42fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f3214d25bbdd49a8a29ce6f30a600024d862102e53bee5c64ac3f0880d97481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-c
ontroller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b344beee6c6fb8f37064a0308\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b344beee6c6fb8f37064a0308\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.979007 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.979618 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.980925 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.982594 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.984279 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.985516 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.986603 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" 
path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.987764 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.988298 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.989465 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.990398 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.991858 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.992548 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.994023 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.994490 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" 
path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.995147 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.996217 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.997277 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.998389 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.999151 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 21 00:10:34 crc kubenswrapper[5118]: I0121 00:10:34.999877 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.001818 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.001867 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.001879 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.001895 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.001907 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:35Z","lastTransitionTime":"2026-01-21T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.002087 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.003660 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.005213 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.006442 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.008147 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" 
path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.009602 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.011390 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.016016 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.017389 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.020565 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.021549 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.023788 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.026053 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.028719 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.030537 5118 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.032745 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.034287 5118 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.034558 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.040341 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.041573 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.042973 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.043780 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.044292 
5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.045609 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.047397 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.048170 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.051588 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.052930 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.054277 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.055045 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.056081 
5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.056724 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.057490 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.058715 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.060176 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.060248 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.060844 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.062027 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" 
path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.062776 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.097768 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qcqwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c0390f5-26b4-4299-958c-acac058be619\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5t5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qcqwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.104633 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.104673 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.104683 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.104699 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.104707 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:35Z","lastTransitionTime":"2026-01-21T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.134486 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4c6b76-3326-4edc-a392-9edcaf197d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69e28ae0052129054be6c0419161beea094bafc8c1cbcdcf5bf3436e7877d421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":
65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.177738 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.206369 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.206410 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.206419 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.206435 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.206444 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:35Z","lastTransitionTime":"2026-01-21T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.215634 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.260146 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0541bb33-5d4a-4ef9-964c-884c727499f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d4lsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.263118 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:10:35 crc kubenswrapper[5118]: E0121 00:10:35.263327 5118 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:10:37.263290172 +0000 UTC m=+92.587537200 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.263372 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs\") pod \"network-metrics-daemon-9hvtf\" (UID: \"21105fbf-0225-4ba6-ba90-17808d5250c6\") " pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:35 crc kubenswrapper[5118]: E0121 00:10:35.263640 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:10:35 crc kubenswrapper[5118]: E0121 00:10:35.263972 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs podName:21105fbf-0225-4ba6-ba90-17808d5250c6 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:37.263943939 +0000 UTC m=+92.588190977 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs") pod "network-metrics-daemon-9hvtf" (UID: "21105fbf-0225-4ba6-ba90-17808d5250c6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.298987 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b75e4d-eb03-4a0f-b349-9596c36b1f7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4192feefd5
cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:10:07Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0121 00:10:06.988883 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 00:10:06.989033 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 00:10:06.989980 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1710353325/tls.crt::/tmp/serving-cert-1710353325/tls.key\\\\\\\"\\\\nI0121 00:10:07.300917 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:10:07.302784 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:10:07.302805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:10:07.302832 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:10:07.302838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:10:07.306381 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:10:07.306408 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306414 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306419 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:10:07.306422 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:10:07.306426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:10:07.306429 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:10:07.306560 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:10:07.307535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:10:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.308329 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.308387 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.308399 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.308416 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.308429 5118 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:35Z","lastTransitionTime":"2026-01-21T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.337107 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.377116 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.410790 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.410845 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.410858 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.410879 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.410894 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:35Z","lastTransitionTime":"2026-01-21T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.420618 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e46657-55ca-43e7-9a43-6bb875c7debf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h8fs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.454352 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sftt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c45c57-9291-48d3-8022-00a314541104\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5fh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sftt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.499615 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ad38d8-0631-494b-8a0c-73936655173c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7da13cdac196a74d6f3d3fe06fd8b8f1b93152d831e98ee1b66f4bd30f77756b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71ef5ad6b3cecbe91cf0fa1e4e8759ddda878222a1c71e9801313336e424458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://531e890ac624829dfeab5674374a20bf8f80e96fe3ad6baff6532501d078f297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93e48b61d0a2e616f65259ffbca42d9d000600a9f57c456e9fafc249cbbfa187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.513251 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.513301 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.513311 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.513325 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.513336 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:35Z","lastTransitionTime":"2026-01-21T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.538876 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea90c3b6-90f2-4468-8987-cbc4691535cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://01a6d01cbabb92bffcca05eb808b4bd0bee991f66f129422707d982e4e3d320f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://37fdbdbec8b545e1b3921af5413cad07f8ffa20745589533bc0fffa6ec9a42fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f3214d25bbdd49a8a29ce6f30a600024d862102e53bee5c64ac3f0880d97481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b344beee6c6fb8f37064a0308\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b344beee6c6fb8f37064a0308\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.578825 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.615814 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.615881 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.615897 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.615914 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.615927 5118 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:35Z","lastTransitionTime":"2026-01-21T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.619365 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.667616 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qcqwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c0390f5-26b4-4299-958c-acac058be619\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5t5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qcqwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.705074 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8e2c6b8-bbac-4c8c-98aa-eed95855d358\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://9d9f0111a3537cc924a7e201bcd1e6a41bc82e79b86ec8f1d33560c518239fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c43bace9e1ec4b78fc3886b886cfc9eb9505e5cd415b54a393092a5fb6bfede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://efa0534fa57e4334809de905bc9c6076a74ca99b2829d2716055befea0eb99ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://898860c9529a12085df4c5531acb1bd4f2bf2dc8acc40c795bef9e642ab80c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cde9ebfec14b67069eee7df51b0b8e257d4b7ccb5fc744f7cf08722b62167f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.718225 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.718297 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.718309 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.718325 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.718338 5118 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:35Z","lastTransitionTime":"2026-01-21T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.736473 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-22r9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.775450 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9hvtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21105fbf-0225-4ba6-ba90-17808d5250c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9hvtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.816278 5118 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc3c284-5d85-4e40-b285-f16062ad8d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kzdr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.820756 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 
00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.820792 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.820805 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.820822 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.820834 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:35Z","lastTransitionTime":"2026-01-21T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.857667 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-znhzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acee46d0-3d60-4d08-abbd-b3df00872f90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-962nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-znhzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.922681 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.922729 5118 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.922741 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.922757 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:35 crc kubenswrapper[5118]: I0121 00:10:35.922770 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:35Z","lastTransitionTime":"2026-01-21T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.025005 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.025041 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.025049 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.025063 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.025074 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:36Z","lastTransitionTime":"2026-01-21T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.127099 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.127137 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.127146 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.127179 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.127188 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:36Z","lastTransitionTime":"2026-01-21T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.229654 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.229908 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.229984 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.230058 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.230129 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:36Z","lastTransitionTime":"2026-01-21T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.333833 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.333916 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.333941 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.333973 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.333998 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:36Z","lastTransitionTime":"2026-01-21T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.436538 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.436621 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.436646 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.436676 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.436698 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:36Z","lastTransitionTime":"2026-01-21T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.539729 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.539810 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.539834 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.539863 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.539889 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:36Z","lastTransitionTime":"2026-01-21T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.578686 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.578829 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.578843 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.578913 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.578979 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:40.578955353 +0000 UTC m=+95.903202401 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.579003 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:40.578991634 +0000 UTC m=+95.903238682 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.641845 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.641893 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.641906 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.641923 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.641935 5118 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:36Z","lastTransitionTime":"2026-01-21T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.679852 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.679905 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.680144 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.680217 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.680239 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.680381 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:40.680344705 +0000 UTC m=+96.004591753 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.680469 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.680493 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.680508 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.680576 5118 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:40.680559441 +0000 UTC m=+96.004806469 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.745119 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.745250 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.745310 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.745343 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.745366 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:36Z","lastTransitionTime":"2026-01-21T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.847934 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.847974 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.847984 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.847998 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.848008 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:36Z","lastTransitionTime":"2026-01-21T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.950254 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.950305 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.950318 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.950338 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.950352 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:36Z","lastTransitionTime":"2026-01-21T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.975419 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.975573 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6"
Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.975583 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.975733 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.975926 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.976047 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 21 00:10:36 crc kubenswrapper[5118]: I0121 00:10:36.975854 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 00:10:36 crc kubenswrapper[5118]: E0121 00:10:36.976293 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.052582 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.052626 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.052638 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.052655 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.052681 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:37Z","lastTransitionTime":"2026-01-21T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.155453 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.155524 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.155542 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.155559 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.155571 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:37Z","lastTransitionTime":"2026-01-21T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.258028 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.258114 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.258138 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.258203 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.258231 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:37Z","lastTransitionTime":"2026-01-21T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.285785 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:10:37 crc kubenswrapper[5118]: E0121 00:10:37.285976 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:10:41.285941784 +0000 UTC m=+96.610188832 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.286303 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs\") pod \"network-metrics-daemon-9hvtf\" (UID: \"21105fbf-0225-4ba6-ba90-17808d5250c6\") " pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:10:37 crc kubenswrapper[5118]: E0121 00:10:37.286485 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 00:10:37 crc kubenswrapper[5118]: E0121 00:10:37.286561 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs podName:21105fbf-0225-4ba6-ba90-17808d5250c6 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:41.28654466 +0000 UTC m=+96.610791718 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs") pod "network-metrics-daemon-9hvtf" (UID: "21105fbf-0225-4ba6-ba90-17808d5250c6") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.360289 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.360359 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.360385 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.360418 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.360441 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:37Z","lastTransitionTime":"2026-01-21T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.463423 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.463491 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.463511 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.463527 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.463540 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:37Z","lastTransitionTime":"2026-01-21T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.565371 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.565409 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.565417 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.565429 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.565437 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:37Z","lastTransitionTime":"2026-01-21T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.667313 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.667381 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.667396 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.667412 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.667423 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:37Z","lastTransitionTime":"2026-01-21T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.769547 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.769590 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.769603 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.769619 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.769631 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:37Z","lastTransitionTime":"2026-01-21T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.871937 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.871974 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.871982 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.871996 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.872007 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:37Z","lastTransitionTime":"2026-01-21T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.974016 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.974046 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.974054 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.974066 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:37 crc kubenswrapper[5118]: I0121 00:10:37.974076 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:37Z","lastTransitionTime":"2026-01-21T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.076293 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.076342 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.076356 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.076379 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.076394 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:38Z","lastTransitionTime":"2026-01-21T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.178679 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.178735 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.178750 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.178771 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.178789 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:38Z","lastTransitionTime":"2026-01-21T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.237970 5118 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.281747 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.281826 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.281842 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.281868 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.281888 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:38Z","lastTransitionTime":"2026-01-21T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.384911 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.385005 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.385019 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.385038 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.385049 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:38Z","lastTransitionTime":"2026-01-21T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.488128 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.488243 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.488259 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.488283 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.488300 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:38Z","lastTransitionTime":"2026-01-21T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.590245 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.590290 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.590304 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.590320 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.590331 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:38Z","lastTransitionTime":"2026-01-21T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.693250 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.693306 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.693317 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.693336 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.693349 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:38Z","lastTransitionTime":"2026-01-21T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.796317 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.796772 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.796861 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.796973 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.797075 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:38Z","lastTransitionTime":"2026-01-21T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.899201 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.899274 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.899284 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.899298 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.899308 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:38Z","lastTransitionTime":"2026-01-21T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.975707 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.975759 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.975847 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:10:38 crc kubenswrapper[5118]: E0121 00:10:38.975928 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6"
Jan 21 00:10:38 crc kubenswrapper[5118]: E0121 00:10:38.975859 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 21 00:10:38 crc kubenswrapper[5118]: E0121 00:10:38.976070 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 21 00:10:38 crc kubenswrapper[5118]: I0121 00:10:38.976195 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 00:10:38 crc kubenswrapper[5118]: E0121 00:10:38.976358 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.001069 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.001107 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.001119 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.001135 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.001146 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:39Z","lastTransitionTime":"2026-01-21T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.102839 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.103064 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.103072 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.103085 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.103094 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:39Z","lastTransitionTime":"2026-01-21T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.205867 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.205906 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.205921 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.205935 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.205944 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:39Z","lastTransitionTime":"2026-01-21T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.308458 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.308501 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.308511 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.308526 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.308534 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:39Z","lastTransitionTime":"2026-01-21T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.410135 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.410456 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.410593 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.410737 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.410870 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:39Z","lastTransitionTime":"2026-01-21T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.513901 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.513946 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.513958 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.513975 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.513986 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:39Z","lastTransitionTime":"2026-01-21T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.615973 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.616020 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.616035 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.616052 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.616062 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:39Z","lastTransitionTime":"2026-01-21T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.717709 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.717739 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.717748 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.717760 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.717768 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:39Z","lastTransitionTime":"2026-01-21T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.820877 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.820942 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.820954 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.820971 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.820991 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:39Z","lastTransitionTime":"2026-01-21T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.923280 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.923326 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.923337 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.923351 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:39 crc kubenswrapper[5118]: I0121 00:10:39.923361 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:39Z","lastTransitionTime":"2026-01-21T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.025211 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.025263 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.025292 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.025322 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.025339 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:40Z","lastTransitionTime":"2026-01-21T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.127559 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.127646 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.127681 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.127715 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.127740 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:40Z","lastTransitionTime":"2026-01-21T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.230028 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.230066 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.230076 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.230090 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.230099 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:40Z","lastTransitionTime":"2026-01-21T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.332302 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.332350 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.332365 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.332385 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.332399 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:40Z","lastTransitionTime":"2026-01-21T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.435679 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.435736 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.435754 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.435777 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.435793 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:40Z","lastTransitionTime":"2026-01-21T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.537761 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.537871 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.537899 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.537928 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.537946 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:40Z","lastTransitionTime":"2026-01-21T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.623326 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.623444 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.623595 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.623624 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.623693 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:48.623669352 +0000 UTC m=+103.947916390 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.623785 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:48.623755984 +0000 UTC m=+103.948003002 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.637317 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.637378 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.637396 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.637416 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.637432 5118 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:40Z","lastTransitionTime":"2026-01-21T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.649108 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b9-ec34ac96c5c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.654084 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.654153 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.654214 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.654242 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.654264 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:40Z","lastTransitionTime":"2026-01-21T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.669775 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b9-ec34ac96c5c7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.674130 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.674242 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.674264 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.674289 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.674306 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:40Z","lastTransitionTime":"2026-01-21T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.692905 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b9-ec34ac96c5c7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.697589 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.697669 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.697685 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.697708 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.697721 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:40Z","lastTransitionTime":"2026-01-21T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.711842 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b9-ec34ac96c5c7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.715461 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.715504 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.715515 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.715533 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.715545 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:40Z","lastTransitionTime":"2026-01-21T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.727791 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.727845 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.727971 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.727989 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.727984 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.728084 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.728001 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.728106 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.728188 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:48.728170836 +0000 UTC m=+104.052417854 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.728224 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:48.728195747 +0000 UTC m=+104.052442805 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.729307 5118 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"134a100e-afd8-41bd-8bdc-3d8d9cbfad99\\\",\\\"systemUUID\\\":\\\"78a64d73-f919-4466-a9b9-ec34ac96c5c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.729462 5118 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.730790 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.730817 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.730827 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.730841 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.730852 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:40Z","lastTransitionTime":"2026-01-21T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.832984 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.833055 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.833066 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.833081 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.833091 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:40Z","lastTransitionTime":"2026-01-21T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.935486 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.935578 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.935601 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.935629 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.935649 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:40Z","lastTransitionTime":"2026-01-21T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.981077 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.981335 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.981488 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6"
Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.981332 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.981571 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 00:10:40 crc kubenswrapper[5118]: I0121 00:10:40.981598 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.981736 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 21 00:10:40 crc kubenswrapper[5118]: E0121 00:10:40.981908 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.038350 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.038422 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.038444 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.038470 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.038500 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:41Z","lastTransitionTime":"2026-01-21T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.141667 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.141706 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.141718 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.141733 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.141744 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:41Z","lastTransitionTime":"2026-01-21T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.244379 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.244430 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.244441 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.244465 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.244478 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:41Z","lastTransitionTime":"2026-01-21T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.333196 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " 
Jan 21 00:10:41 crc kubenswrapper[5118]: E0121 00:10:41.333341 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:10:49.333323033 +0000 UTC m=+104.657570051 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.333516 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs\") pod \"network-metrics-daemon-9hvtf\" (UID: \"21105fbf-0225-4ba6-ba90-17808d5250c6\") " pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:10:41 crc kubenswrapper[5118]: E0121 00:10:41.333660 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 00:10:41 crc kubenswrapper[5118]: E0121 00:10:41.333744 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs podName:21105fbf-0225-4ba6-ba90-17808d5250c6 nodeName:}" failed. No retries permitted until 2026-01-21 00:10:49.333727554 +0000 UTC m=+104.657974612 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs") pod "network-metrics-daemon-9hvtf" (UID: "21105fbf-0225-4ba6-ba90-17808d5250c6") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.347083 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.347199 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.347219 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.347240 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.347260 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:41Z","lastTransitionTime":"2026-01-21T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.449855 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.450209 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.450255 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.450288 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.450311 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:41Z","lastTransitionTime":"2026-01-21T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.552999 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.553040 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.553051 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.553066 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.553078 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:41Z","lastTransitionTime":"2026-01-21T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.655619 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.655665 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.655677 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.655695 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.655707 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:41Z","lastTransitionTime":"2026-01-21T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.757682 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.757746 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.757756 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.757770 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.757782 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:41Z","lastTransitionTime":"2026-01-21T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.860505 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.860584 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.860601 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.860621 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.860633 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:41Z","lastTransitionTime":"2026-01-21T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.962650 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.962693 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.962703 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.962718 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:41 crc kubenswrapper[5118]: I0121 00:10:41.962728 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:41Z","lastTransitionTime":"2026-01-21T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.064282 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.064321 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.064331 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.064345 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.064355 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:42Z","lastTransitionTime":"2026-01-21T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.166781 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.167071 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.167152 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.167272 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.167342 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:42Z","lastTransitionTime":"2026-01-21T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.270146 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.270236 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.270252 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.270277 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.270294 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:42Z","lastTransitionTime":"2026-01-21T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.373101 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.373149 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.373184 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.373201 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.373213 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:42Z","lastTransitionTime":"2026-01-21T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.475867 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.475929 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.475949 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.475972 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.475989 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:42Z","lastTransitionTime":"2026-01-21T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.578541 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.578592 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.578601 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.578618 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.578628 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:42Z","lastTransitionTime":"2026-01-21T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.682254 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.682313 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.682332 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.682356 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.682374 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:42Z","lastTransitionTime":"2026-01-21T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.784569 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.784644 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.784668 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.784697 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.784719 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:42Z","lastTransitionTime":"2026-01-21T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.791128 5118 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.886724 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.886782 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.886795 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.886810 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.886821 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:42Z","lastTransitionTime":"2026-01-21T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.979677 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:42 crc kubenswrapper[5118]: E0121 00:10:42.979798 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.979878 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.980130 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.980201 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:10:42 crc kubenswrapper[5118]: E0121 00:10:42.980200 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 00:10:42 crc kubenswrapper[5118]: E0121 00:10:42.980333 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6" Jan 21 00:10:42 crc kubenswrapper[5118]: E0121 00:10:42.980452 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.988701 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.988745 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.988761 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.988778 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:42 crc kubenswrapper[5118]: I0121 00:10:42.988790 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:42Z","lastTransitionTime":"2026-01-21T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.091452 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.091766 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.091855 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.091947 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.092025 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:43Z","lastTransitionTime":"2026-01-21T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.194247 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.194302 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.194315 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.194333 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.194347 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:43Z","lastTransitionTime":"2026-01-21T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.296684 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.296738 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.296750 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.296768 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.296781 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:43Z","lastTransitionTime":"2026-01-21T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.398952 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.398986 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.398999 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.399012 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.399020 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:43Z","lastTransitionTime":"2026-01-21T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.500805 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.500845 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.500854 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.500868 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.500878 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:43Z","lastTransitionTime":"2026-01-21T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.603259 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.603305 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.603318 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.603336 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.603348 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:43Z","lastTransitionTime":"2026-01-21T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.705423 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.705488 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.705506 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.705530 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.705544 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:43Z","lastTransitionTime":"2026-01-21T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.808067 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.808108 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.808120 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.808137 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.808151 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:43Z","lastTransitionTime":"2026-01-21T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.910341 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.910392 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.910410 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.910432 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:43 crc kubenswrapper[5118]: I0121 00:10:43.910449 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:43Z","lastTransitionTime":"2026-01-21T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.012303 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.012348 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.012361 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.012375 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.012385 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:44Z","lastTransitionTime":"2026-01-21T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.114591 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.114922 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.114939 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.114955 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.114965 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:44Z","lastTransitionTime":"2026-01-21T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.217421 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.217487 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.217507 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.217533 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.217551 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:44Z","lastTransitionTime":"2026-01-21T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.320151 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.320202 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.320211 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.320223 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.320231 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:44Z","lastTransitionTime":"2026-01-21T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.422311 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.422347 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.422356 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.422369 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.422379 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:44Z","lastTransitionTime":"2026-01-21T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.524574 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.524627 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.524638 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.524656 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.524668 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:44Z","lastTransitionTime":"2026-01-21T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.626818 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.626908 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.626936 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.626968 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.626988 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:44Z","lastTransitionTime":"2026-01-21T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.729195 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.729259 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.729277 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.729301 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.729321 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:44Z","lastTransitionTime":"2026-01-21T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.832012 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.832068 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.832083 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.832105 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.832117 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:44Z","lastTransitionTime":"2026-01-21T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.934238 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.934313 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.934337 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.934363 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.934385 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:44Z","lastTransitionTime":"2026-01-21T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.975536 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.975623 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:10:44 crc kubenswrapper[5118]: E0121 00:10:44.975790 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.975875 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.976047 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 00:10:44 crc kubenswrapper[5118]: E0121 00:10:44.976068 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 00:10:44 crc kubenswrapper[5118]: E0121 00:10:44.976253 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 00:10:44 crc kubenswrapper[5118]: E0121 00:10:44.976404 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 00:10:44 crc kubenswrapper[5118]: I0121 00:10:44.993497 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4c6b76-3326-4edc-a392-9edcaf197d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69e28ae0052129054be6c0419161beea094bafc8c1cbcdcf5bf3436e7877d421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\
":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.012747 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.026526 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.036915 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.036956 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 
00:10:45.036967 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.036984 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.036997 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:45Z","lastTransitionTime":"2026-01-21T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.045667 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0541bb33-5d4a-4ef9-964c-884c727499f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d4lsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.058403 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b75e4d-eb03-4a0f-b349-9596c36b1f7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:10:07Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0121 00:10:06.988883 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 00:10:06.989033 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 00:10:06.989980 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1710353325/tls.crt::/tmp/serving-cert-1710353325/tls.key\\\\\\\"\\\\nI0121 00:10:07.300917 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:10:07.302784 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:10:07.302805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:10:07.302832 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:10:07.302838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:10:07.306381 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:10:07.306408 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306414 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306419 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:10:07.306422 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:10:07.306426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:10:07.306429 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:10:07.306560 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:10:07.307535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:10:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.068800 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.078649 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.091579 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e46657-55ca-43e7-9a43-6bb875c7debf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h8fs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.100431 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sftt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c45c57-9291-48d3-8022-00a314541104\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services 
have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5fh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sftt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.119011 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ad38d8-0631-494b-8a0c-73936655173c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7da13cdac196a74d6f3d3fe06fd8b8f1b93152d831e98ee1b66f4bd30f77756b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71ef5ad6b3cecbe91cf0fa1e4e8759ddda878222a1c71e9801313336e424458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://531e890ac624829dfeab5674374a20bf8f80e96fe3ad6baff6532501d078f297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93e48b61d0a2e616f65259ffbca42d9d000600a9f57c456e9fafc249cbbfa187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.129668 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea90c3b6-90f2-4468-8987-cbc4691535cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://01a6d01cbabb92bffcca05eb808b4bd0bee991f66f129422707d982e4e3d320f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca0
8e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://37fdbdbec8b545e1b3921af5413cad07f8ffa20745589533bc0fffa6ec9a42fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\
"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f3214d25bbdd49a8a29ce6f30a600024d862102e53bee5c64ac3f0880d97481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b344beee6c6fb8f37064a0308\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b34
4beee6c6fb8f37064a0308\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.138620 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.138656 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.138669 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.138686 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.138697 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:45Z","lastTransitionTime":"2026-01-21T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.140501 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.149599 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.159226 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qcqwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c0390f5-26b4-4299-958c-acac058be619\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5t5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qcqwq\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.177453 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8e2c6b8-bbac-4c8c-98aa-eed95855d358\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://9d9f0111a3537cc924a7e201bcd1e6a41bc82e79b86ec8f1d33560c518239fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T
00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c43bace9e1ec4b78fc3886b886cfc9eb9505e5cd415b54a393092a5fb6bfede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://efa0534fa57e4334809de905bc9c6076a74ca99b2829d2716055befea0eb99ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf
05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://898860c9529a12085df4c5531acb1bd4f2bf2dc8acc40c795bef9e642ab80c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\
\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cde9ebfec14b67069eee7df51b0b8e257d4b7ccb5fc744f7cf08722b62167f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":fal
se,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.185486 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-22r9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.191919 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9hvtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21105fbf-0225-4ba6-ba90-17808d5250c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9hvtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.199690 5118 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc3c284-5d85-4e40-b285-f16062ad8d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kzdr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.206724 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-znhzw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acee46d0-3d60-4d08-abbd-b3df00872f90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-962nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-znhzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.241139 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.241206 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.241227 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.241244 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.241256 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:45Z","lastTransitionTime":"2026-01-21T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.343046 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.343324 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.343334 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.343346 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.343354 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:45Z","lastTransitionTime":"2026-01-21T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.446560 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.446601 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.446610 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.446625 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.446635 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:45Z","lastTransitionTime":"2026-01-21T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.548744 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.548790 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.548801 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.548816 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.548827 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:45Z","lastTransitionTime":"2026-01-21T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.651722 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.651786 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.651802 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.651823 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.651844 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:45Z","lastTransitionTime":"2026-01-21T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.755083 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.755190 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.755203 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.755225 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.755241 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:45Z","lastTransitionTime":"2026-01-21T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.857995 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.858091 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.858116 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.858183 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.858211 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:45Z","lastTransitionTime":"2026-01-21T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.960574 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.960908 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.961057 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.961249 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:45 crc kubenswrapper[5118]: I0121 00:10:45.961395 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:45Z","lastTransitionTime":"2026-01-21T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.064420 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.064460 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.064471 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.064485 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.064496 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:46Z","lastTransitionTime":"2026-01-21T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.166574 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.166614 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.166623 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.166636 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.166647 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:46Z","lastTransitionTime":"2026-01-21T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.269152 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.269224 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.269235 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.269251 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.269261 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:46Z","lastTransitionTime":"2026-01-21T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.344065 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-znhzw" event={"ID":"acee46d0-3d60-4d08-abbd-b3df00872f90","Type":"ContainerStarted","Data":"6cd62b41a31b580b73d7ea19bbd872e667e3b258a9f8927e7417e71adb48baaf"} Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.346866 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9sftt" event={"ID":"50c45c57-9291-48d3-8022-00a314541104","Type":"ContainerStarted","Data":"ecf130babd03da45c443de89b17fb55a81f12f8dd6c5980a93e30536346476f2"} Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.360335 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0541bb33-5d4a-4ef9-964c-884c727499f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d4lsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.371494 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.371539 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.371549 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:46 
crc kubenswrapper[5118]: I0121 00:10:46.371565 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.371574 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:46Z","lastTransitionTime":"2026-01-21T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.373639 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b75e4d-eb03-4a0f-b349-9596c36b1f7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4192feefd5
cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:10:07Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0121 00:10:06.988883 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 00:10:06.989033 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 00:10:06.989980 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1710353325/tls.crt::/tmp/serving-cert-1710353325/tls.key\\\\\\\"\\\\nI0121 00:10:07.300917 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:10:07.302784 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:10:07.302805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:10:07.302832 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:10:07.302838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:10:07.306381 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:10:07.306408 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306414 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306419 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:10:07.306422 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:10:07.306426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:10:07.306429 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:10:07.306560 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:10:07.307535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:10:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.384692 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.394667 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.412248 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e46657-55ca-43e7-9a43-6bb875c7debf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h8fs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.420180 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sftt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c45c57-9291-48d3-8022-00a314541104\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services 
have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5fh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sftt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.430874 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ad38d8-0631-494b-8a0c-73936655173c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7da13cdac196a74d6f3d3fe06fd8b8f1b93152d831e98ee1b66f4bd30f77756b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71ef5ad6b3cecbe91cf0fa1e4e8759ddda878222a1c71e9801313336e424458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://531e890ac624829dfeab5674374a20bf8f80e96fe3ad6baff6532501d078f297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93e48b61d0a2e616f65259ffbca42d9d000600a9f57c456e9fafc249cbbfa187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.440194 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea90c3b6-90f2-4468-8987-cbc4691535cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://01a6d01cbabb92bffcca05eb808b4bd0bee991f66f129422707d982e4e3d320f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca0
8e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://37fdbdbec8b545e1b3921af5413cad07f8ffa20745589533bc0fffa6ec9a42fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\
"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f3214d25bbdd49a8a29ce6f30a600024d862102e53bee5c64ac3f0880d97481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b344beee6c6fb8f37064a0308\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b34
4beee6c6fb8f37064a0308\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.449519 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.458105 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.469354 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qcqwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c0390f5-26b4-4299-958c-acac058be619\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5t5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qcqwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.473815 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.473847 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.473857 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.473872 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.473882 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:46Z","lastTransitionTime":"2026-01-21T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.491545 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8e2c6b8-bbac-4c8c-98aa-eed95855d358\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://9d9f0111a3537cc924a7e201bcd1e6a41bc82e79b86ec8f1d33560c518239fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:0
9Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c43bace9e1ec4b78fc3886b886cfc9eb9505e5cd415b54a393092a5fb6bfede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://efa0534fa57e4334809de905bc9c6076a74ca99b2829d2716055befea0eb99ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2b
a66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://898860c9529a12085df4c5531acb1bd4f2bf2dc8acc40c795bef9e642ab80c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\
\\"},\\\"containerID\\\":\\\"cri-o://cde9ebfec14b67069eee7df51b0b8e257d4b7ccb5fc744f7cf08722b62167f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.503879 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-22r9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.514963 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9hvtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21105fbf-0225-4ba6-ba90-17808d5250c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9hvtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.525550 5118 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc3c284-5d85-4e40-b285-f16062ad8d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kzdr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.531861 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-znhzw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acee46d0-3d60-4d08-abbd-b3df00872f90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd62b41a31b580b73d7ea19bbd872e667e3b258a9f8927e7417e71adb48baaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:10:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-962nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-znhzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.540894 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4c6b76-3326-4edc-a392-9edcaf197d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69e28ae0052129054be6c0419161beea094bafc8c1cbcdcf5bf3436e7877d421\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\
\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.551781 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.563403 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.576064 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.576114 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.576127 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.576144 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.576171 5118 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:46Z","lastTransitionTime":"2026-01-21T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.578355 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b75e4d-eb03-4a0f-b349-9596c36b1f7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4192feefd5
cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:10:07Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0121 00:10:06.988883 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 00:10:06.989033 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 00:10:06.989980 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1710353325/tls.crt::/tmp/serving-cert-1710353325/tls.key\\\\\\\"\\\\nI0121 00:10:07.300917 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:10:07.302784 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:10:07.302805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:10:07.302832 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:10:07.302838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:10:07.306381 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:10:07.306408 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306414 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306419 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:10:07.306422 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:10:07.306426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:10:07.306429 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:10:07.306560 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:10:07.307535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:10:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.588903 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.599332 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.614831 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e46657-55ca-43e7-9a43-6bb875c7debf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h8fs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.622196 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sftt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c45c57-9291-48d3-8022-00a314541104\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://ecf130babd03da45c443de89b17fb55a81f12f8dd6c5980a93e30536346476f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"re
quests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:10:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5fh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sftt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.633405 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ad38d8-0631-494b-8a0c-73936655173c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7da13cdac196a74d6f3d3fe06fd8b8f1b93152d831e98ee1b66f4bd30f77756b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71ef5ad6b3cecbe91cf0fa1e4e8759ddda878222a1c71e9801313336e424458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://531e890ac624829dfeab5674374a20bf8f80e96fe3ad6baff6532501d078f297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93e48b61d0a2e616f65259ffbca42d9d000600a9f57c456e9fafc249cbbfa187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.641688 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea90c3b6-90f2-4468-8987-cbc4691535cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://01a6d01cbabb92bffcca05eb808b4bd0bee991f66f129422707d982e4e3d320f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca0
8e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://37fdbdbec8b545e1b3921af5413cad07f8ffa20745589533bc0fffa6ec9a42fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\
"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f3214d25bbdd49a8a29ce6f30a600024d862102e53bee5c64ac3f0880d97481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b344beee6c6fb8f37064a0308\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b34
4beee6c6fb8f37064a0308\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.651623 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.661120 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.675199 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qcqwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c0390f5-26b4-4299-958c-acac058be619\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5t5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qcqwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.678798 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.678936 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.679023 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.679113 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.679248 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:46Z","lastTransitionTime":"2026-01-21T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.692958 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8e2c6b8-bbac-4c8c-98aa-eed95855d358\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://9d9f0111a3537cc924a7e201bcd1e6a41bc82e79b86ec8f1d33560c518239fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:0
9Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c43bace9e1ec4b78fc3886b886cfc9eb9505e5cd415b54a393092a5fb6bfede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://efa0534fa57e4334809de905bc9c6076a74ca99b2829d2716055befea0eb99ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2b
a66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://898860c9529a12085df4c5531acb1bd4f2bf2dc8acc40c795bef9e642ab80c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\
\\"},\\\"containerID\\\":\\\"cri-o://cde9ebfec14b67069eee7df51b0b8e257d4b7ccb5fc744f7cf08722b62167f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.701987 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-22r9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.710431 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9hvtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21105fbf-0225-4ba6-ba90-17808d5250c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9hvtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.719973 5118 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc3c284-5d85-4e40-b285-f16062ad8d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kzdr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.727982 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-znhzw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acee46d0-3d60-4d08-abbd-b3df00872f90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd62b41a31b580b73d7ea19bbd872e667e3b258a9f8927e7417e71adb48baaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:10:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-962nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-znhzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.735791 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4c6b76-3326-4edc-a392-9edcaf197d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69e28ae0052129054be6c0419161beea094bafc8c1cbcdcf5bf3436e7877d421\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\
\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.744176 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.754341 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.767426 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0541bb33-5d4a-4ef9-964c-884c727499f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d4lsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.781005 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.781040 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.781051 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:46 
crc kubenswrapper[5118]: I0121 00:10:46.781068 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.781078 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:46Z","lastTransitionTime":"2026-01-21T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.882619 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.882689 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.882701 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.882715 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.882727 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:46Z","lastTransitionTime":"2026-01-21T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.974797 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.974940 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.974977 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.975152 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 00:10:46 crc kubenswrapper[5118]: E0121 00:10:46.975150 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6" Jan 21 00:10:46 crc kubenswrapper[5118]: E0121 00:10:46.975256 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 00:10:46 crc kubenswrapper[5118]: E0121 00:10:46.975570 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 00:10:46 crc kubenswrapper[5118]: E0121 00:10:46.975722 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.984605 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.984662 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.984678 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.984697 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:46 crc kubenswrapper[5118]: I0121 00:10:46.984712 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:46Z","lastTransitionTime":"2026-01-21T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.086562 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.086609 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.086620 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.086634 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.086644 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:47Z","lastTransitionTime":"2026-01-21T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.188860 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.188908 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.188920 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.188935 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.188969 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:47Z","lastTransitionTime":"2026-01-21T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.291881 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.291953 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.291965 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.291984 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.291998 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:47Z","lastTransitionTime":"2026-01-21T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.394651 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.394682 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.394691 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.394703 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.394712 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:47Z","lastTransitionTime":"2026-01-21T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.496631 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.496691 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.496707 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.496726 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.496741 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:47Z","lastTransitionTime":"2026-01-21T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.598249 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.598284 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.598292 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.598304 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.598313 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:47Z","lastTransitionTime":"2026-01-21T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.700938 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.700986 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.700997 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.701012 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.701024 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:47Z","lastTransitionTime":"2026-01-21T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.803524 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.803602 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.803620 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.803645 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.803662 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:47Z","lastTransitionTime":"2026-01-21T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.906217 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.906298 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.906338 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.906371 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:47 crc kubenswrapper[5118]: I0121 00:10:47.906401 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:47Z","lastTransitionTime":"2026-01-21T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.008856 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.008929 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.008956 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.008988 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.009012 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:48Z","lastTransitionTime":"2026-01-21T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.111301 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.111364 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.111392 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.111421 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.111444 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:48Z","lastTransitionTime":"2026-01-21T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.213796 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.213864 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.213887 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.213917 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.213943 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:48Z","lastTransitionTime":"2026-01-21T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.316583 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.316656 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.316674 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.316696 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.316715 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:48Z","lastTransitionTime":"2026-01-21T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.419132 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.419245 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.419273 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.419302 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.419325 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:48Z","lastTransitionTime":"2026-01-21T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.522366 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.522431 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.522450 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.522474 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.522491 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:48Z","lastTransitionTime":"2026-01-21T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.624633 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.624720 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.624733 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.624752 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.624766 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:48Z","lastTransitionTime":"2026-01-21T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.715950 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.716077 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.716399 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.716414 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.716618 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 00:11:04.716485439 +0000 UTC m=+120.040732497 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.716665 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 00:11:04.716631243 +0000 UTC m=+120.040878311 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.726933 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.726983 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.726999 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.727025 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.727042 5118 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:48Z","lastTransitionTime":"2026-01-21T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.817240 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.817311 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.817500 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.817523 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.817544 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.817557 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.817604 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.817626 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.817642 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 00:11:04.817607324 +0000 UTC m=+120.141854382 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.817725 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 00:11:04.817699147 +0000 UTC m=+120.141946195 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.830075 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.830124 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.830144 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.830203 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.830219 5118 setters.go:618] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:48Z","lastTransitionTime":"2026-01-21T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.933520 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.933602 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.933627 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.933658 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.933683 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:48Z","lastTransitionTime":"2026-01-21T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.975457 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.975638 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.975732 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.976482 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.976592 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 00:10:48 crc kubenswrapper[5118]: I0121 00:10:48.976684 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.976811 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 00:10:48 crc kubenswrapper[5118]: E0121 00:10:48.977095 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.035683 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.035730 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.035745 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.035764 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.035783 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:49Z","lastTransitionTime":"2026-01-21T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.136910 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.136948 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.136956 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.136970 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.136980 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:49Z","lastTransitionTime":"2026-01-21T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.168467 5118 scope.go:117] "RemoveContainer" containerID="4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.240536 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.240966 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.240982 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.240999 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.241010 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:49Z","lastTransitionTime":"2026-01-21T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.343071 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.343104 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.343113 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.343125 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.343135 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:49Z","lastTransitionTime":"2026-01-21T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.369268 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"d00f57d42fb7c4fd9a0c1aae39c2c8e5cd548479e0a5c0d2f264a7df5d4efca3"} Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.371791 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" event={"ID":"ddc3c284-5d85-4e40-b285-f16062ad8d9c","Type":"ContainerStarted","Data":"9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959"} Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.379653 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44eb9bc7-60a3-421c-bf5e-d1d9a5026435\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxsjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-22r9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.391561 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9hvtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21105fbf-0225-4ba6-ba90-17808d5250c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjsv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9hvtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.400953 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc3c284-5d85-4e40-b285-f16062ad8d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have 
not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzdws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kzdr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.408679 5118 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-znhzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acee46d0-3d60-4d08-abbd-b3df00872f90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd62b41a31b580b73d7ea19bbd872e667e3b258a9f8927e7417e71adb48baaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:10:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/
tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-962nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-znhzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.417339 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff4c6b76-3326-4edc-a392-9edcaf197d60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69e28ae0052129054be6c04191
61beea094bafc8c1cbcdcf5bf3436e7877d421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b22458e11edc4b0d627c073d6b85af24b2b8e00bc13359490fd9ffac95677b1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.424642 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.424780 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs\") pod \"network-metrics-daemon-9hvtf\" (UID: \"21105fbf-0225-4ba6-ba90-17808d5250c6\") " pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:49 crc kubenswrapper[5118]: E0121 00:10:49.424882 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:10:49 crc kubenswrapper[5118]: E0121 00:10:49.424928 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs 
podName:21105fbf-0225-4ba6-ba90-17808d5250c6 nodeName:}" failed. No retries permitted until 2026-01-21 00:11:05.424913987 +0000 UTC m=+120.749161005 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs") pod "network-metrics-daemon-9hvtf" (UID: "21105fbf-0225-4ba6-ba90-17808d5250c6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 00:10:49 crc kubenswrapper[5118]: E0121 00:10:49.424973 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:05.424966939 +0000 UTC m=+120.749213957 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.429064 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.438313 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.444495 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.444522 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 
00:10:49.444531 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.444543 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.444552 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:49Z","lastTransitionTime":"2026-01-21T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.449572 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0541bb33-5d4a-4ef9-964c-884c727499f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmjvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d4lsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.461262 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82b75e4d-eb03-4a0f-b349-9596c36b1f7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T00:10:07Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0121 00:10:06.988883 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 00:10:06.989033 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 00:10:06.989980 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1710353325/tls.crt::/tmp/serving-cert-1710353325/tls.key\\\\\\\"\\\\nI0121 00:10:07.300917 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 00:10:07.302784 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 00:10:07.302805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 00:10:07.302832 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 00:10:07.302838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 00:10:07.306381 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 00:10:07.306408 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306414 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 00:10:07.306419 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 00:10:07.306422 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 00:10:07.306426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 00:10:07.306429 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 00:10:07.306560 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 00:10:07.307535 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T00:10:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.472578 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.484891 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.501075 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e46657-55ca-43e7-9a43-6bb875c7debf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfh6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h8fs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.510854 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9sftt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c45c57-9291-48d3-8022-00a314541104\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://ecf130babd03da45c443de89b17fb55a81f12f8dd6c5980a93e30536346476f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"re
quests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:10:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5fh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9sftt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.523100 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ad38d8-0631-494b-8a0c-73936655173c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7da13cdac196a74d6f3d3fe06fd8b8f1b93152d831e98ee1b66f4bd30f77756b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71ef5ad6b3cecbe91cf0fa1e4e8759ddda878222a1c71e9801313336e424458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://531e890ac624829dfeab5674374a20bf8f80e96fe3ad6baff6532501d078f297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93e48b61d0a2e616f65259ffbca42d9d000600a9f57c456e9fafc249cbbfa187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.532757 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea90c3b6-90f2-4468-8987-cbc4691535cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://01a6d01cbabb92bffcca05eb808b4bd0bee991f66f129422707d982e4e3d320f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca0
8e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://37fdbdbec8b545e1b3921af5413cad07f8ffa20745589533bc0fffa6ec9a42fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\
"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f3214d25bbdd49a8a29ce6f30a600024d862102e53bee5c64ac3f0880d97481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b344beee6c6fb8f37064a0308\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4b43bf3081fdf5d3820618be5c2c1f9db8f1b4b34
4beee6c6fb8f37064a0308\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.545039 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d00f57d42fb7c4fd9a0c1aae39c2c8e5cd548479e0a5c0d2f264a7df5d4efca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:10:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.547046 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.547076 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.547084 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.547098 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.547107 5118 
setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:49Z","lastTransitionTime":"2026-01-21T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.554074 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.566323 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qcqwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c0390f5-26b4-4299-958c-acac058be619\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:10:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5t5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:10:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qcqwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.588106 5118 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8e2c6b8-bbac-4c8c-98aa-eed95855d358\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://9d9f0111a3537cc924a7e201bcd1e6a41bc82e79b86ec8f1d33560c518239fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c43bace9e1ec4b78fc3886b886cfc9eb9505e5cd415b54a393092a5fb6bfede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://efa0534fa57e4334809de905bc9c6076a74ca99b2829d2716055befea0eb99ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://898860c9529a12085df4c5531acb1bd4f2bf2dc8acc40c795bef9e642ab80c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cde9ebfec14b67069eee7df51b0b8e257d4b7ccb5fc744f7cf08722b62167f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d69bc807ea39c1b6e472b26c0ed0618f1da285bdc9fd839f01a7c311aa34ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa5b4348d36675413cb2b023cbf9c78e23c0ee25d1298fecd6ef727c36d64ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://b755b446899236ed8c24e77229b7d2ca8cec62643e7c60e7d77fd8a26bdea1ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T00:09:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T00:09:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.648788 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.648832 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.648844 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.648861 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.648872 5118 setters.go:618] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:49Z","lastTransitionTime":"2026-01-21T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.750933 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.750975 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.750986 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.751003 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.751016 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:49Z","lastTransitionTime":"2026-01-21T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.853033 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.853067 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.853076 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.853089 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.853098 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:49Z","lastTransitionTime":"2026-01-21T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.956222 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.956440 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.956452 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.956467 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:49 crc kubenswrapper[5118]: I0121 00:10:49.956479 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:49Z","lastTransitionTime":"2026-01-21T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.058298 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.058347 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.058362 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.058380 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.058392 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:50Z","lastTransitionTime":"2026-01-21T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.160074 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.160110 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.160120 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.160135 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.160143 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:50Z","lastTransitionTime":"2026-01-21T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.262488 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.262826 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.262835 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.262847 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.262856 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:50Z","lastTransitionTime":"2026-01-21T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.364685 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.364756 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.364791 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.364812 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.364824 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:50Z","lastTransitionTime":"2026-01-21T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.383544 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" event={"ID":"ddc3c284-5d85-4e40-b285-f16062ad8d9c","Type":"ContainerStarted","Data":"65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.385803 5118 generic.go:358] "Generic (PLEG): container finished" podID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerID="9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e" exitCode=0
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.385919 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerDied","Data":"9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.390820 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"2d4e8fb016e9db3deea5f1f9381ae324638bb3a056edde031f4f73df0b9c3788"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.391066 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"c0e362a420c6b6fb85f3e01b01c6517d80de5cb6e9d298bb4bf4e1dcd55c71c0"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.392441 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qcqwq" event={"ID":"7c0390f5-26b4-4299-958c-acac058be619","Type":"ContainerStarted","Data":"a76c675001b1e3a4e3d344ae261bddc8ead10e9d0619b5012a61c50027134efe"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.393991 5118
generic.go:358] "Generic (PLEG): container finished" podID="0541bb33-5d4a-4ef9-964c-884c727499f6" containerID="635b85902a1ff3fc9aefc8126901ef1c3669dfa113b5e1a5898cd6f94b5c36be" exitCode=0
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.394065 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" event={"ID":"0541bb33-5d4a-4ef9-964c-884c727499f6","Type":"ContainerDied","Data":"635b85902a1ff3fc9aefc8126901ef1c3669dfa113b5e1a5898cd6f94b5c36be"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.396746 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"ebce512679b1ac6a1172cf6df51d1cdffd5fd6e643bd11e70ffe7482570cd359"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.396814 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"3d0e675f334ffded3691cfab55969aa48904594050bf856505571c66ce33624e"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.400798 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.404109 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"24f0d293f59538f561797000d7d6d3bf5cf65c588d7307bbc362958b5c993376"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.404826 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121
00:10:50.466971 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.467029 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.467052 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.467081 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.467104 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:50Z","lastTransitionTime":"2026-01-21T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.570341 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.570421 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.570441 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.570464 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.570482 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:50Z","lastTransitionTime":"2026-01-21T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.673074 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.673133 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.673153 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.673232 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.673251 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:50Z","lastTransitionTime":"2026-01-21T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.776349 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.776429 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.776452 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.776482 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.776505 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:50Z","lastTransitionTime":"2026-01-21T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.821343 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.821637 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.821701 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.821798 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.821870 5118 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T00:10:50Z","lastTransitionTime":"2026-01-21T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.896012 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.895993385 podStartE2EDuration="17.895993385s" podCreationTimestamp="2026-01-21 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:10:50.895227835 +0000 UTC m=+106.219474893" watchObservedRunningTime="2026-01-21 00:10:50.895993385 +0000 UTC m=+106.220240403"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.944483 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46"]
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.971516 5118 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Jan 21 00:10:50 crc kubenswrapper[5118]: I0121 00:10:50.981385 5118 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 21 00:10:51 crc kubenswrapper[5118]: I0121 00:10:51.084883 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=19.08486463 podStartE2EDuration="19.08486463s" podCreationTimestamp="2026-01-21 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:10:51.083184966 +0000 UTC m=+106.407432004" watchObservedRunningTime="2026-01-21 00:10:51.08486463 +0000 UTC m=+106.409111648"
Jan 21 00:10:51 crc kubenswrapper[5118]: I0121 00:10:51.085112 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-9sftt"
podStartSLOduration=85.085107927 podStartE2EDuration="1m25.085107927s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:10:51.065847295 +0000 UTC m=+106.390094313" watchObservedRunningTime="2026-01-21 00:10:51.085107927 +0000 UTC m=+106.409354945"
Jan 21 00:10:51 crc kubenswrapper[5118]: I0121 00:10:51.111400 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=18.111384584 podStartE2EDuration="18.111384584s" podCreationTimestamp="2026-01-21 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:10:51.099570121 +0000 UTC m=+106.423817149" watchObservedRunningTime="2026-01-21 00:10:51.111384584 +0000 UTC m=+106.435631602"
Jan 21 00:10:51 crc kubenswrapper[5118]: I0121 00:10:51.161351 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=18.16133227 podStartE2EDuration="18.16133227s" podCreationTimestamp="2026-01-21 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:10:51.160778696 +0000 UTC m=+106.485025724" watchObservedRunningTime="2026-01-21 00:10:51.16133227 +0000 UTC m=+106.485579288"
Jan 21 00:10:51 crc kubenswrapper[5118]: I0121 00:10:51.193525 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" podStartSLOduration=85.193508275 podStartE2EDuration="1m25.193508275s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21
00:10:51.192816666 +0000 UTC m=+106.517063684" watchObservedRunningTime="2026-01-21 00:10:51.193508275 +0000 UTC m=+106.517755313"
Jan 21 00:10:51 crc kubenswrapper[5118]: I0121 00:10:51.205575 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-znhzw" podStartSLOduration=85.205558115 podStartE2EDuration="1m25.205558115s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:10:51.204939668 +0000 UTC m=+106.529186706" watchObservedRunningTime="2026-01-21 00:10:51.205558115 +0000 UTC m=+106.529805133"
Jan 21 00:10:51 crc kubenswrapper[5118]: I0121 00:10:51.244922 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.244897209 podStartE2EDuration="19.244897209s" podCreationTimestamp="2026-01-21 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:10:51.244811067 +0000 UTC m=+106.569058105" watchObservedRunningTime="2026-01-21 00:10:51.244897209 +0000 UTC m=+106.569144247"
Jan 21 00:10:51 crc kubenswrapper[5118]: I0121 00:10:51.291701 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podStartSLOduration=85.291684661 podStartE2EDuration="1m25.291684661s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:10:51.291187638 +0000 UTC m=+106.615434676" watchObservedRunningTime="2026-01-21 00:10:51.291684661 +0000 UTC m=+106.615931689"
Jan 21 00:10:51 crc kubenswrapper[5118]: I0121 00:10:51.292588 5118 pod_startup_latency_tracker.go:104] "Observed pod startup
duration" pod="openshift-multus/multus-qcqwq" podStartSLOduration=85.292581195 podStartE2EDuration="1m25.292581195s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:10:51.277073603 +0000 UTC m=+106.601320631" watchObservedRunningTime="2026-01-21 00:10:51.292581195 +0000 UTC m=+106.616828223"
Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.334229 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 00:10:53 crc kubenswrapper[5118]: E0121 00:10:53.334662 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.334791 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:10:53 crc kubenswrapper[5118]: E0121 00:10:53.334962 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.334512 5118 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.335330 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:53 crc kubenswrapper[5118]: E0121 00:10:53.335430 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 00:10:53 crc kubenswrapper[5118]: E0121 00:10:53.336036 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.336187 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.338677 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.339130 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.339338 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.342047 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.373682 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/552a6fc1-1fa0-41e7-ab66-5ceb642813f7-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-7bp46\" (UID: \"552a6fc1-1fa0-41e7-ab66-5ceb642813f7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.373813 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/552a6fc1-1fa0-41e7-ab66-5ceb642813f7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-7bp46\" (UID: \"552a6fc1-1fa0-41e7-ab66-5ceb642813f7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.373840 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/552a6fc1-1fa0-41e7-ab66-5ceb642813f7-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-7bp46\" (UID: \"552a6fc1-1fa0-41e7-ab66-5ceb642813f7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.373922 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/552a6fc1-1fa0-41e7-ab66-5ceb642813f7-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-7bp46\" (UID: \"552a6fc1-1fa0-41e7-ab66-5ceb642813f7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.373940 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/552a6fc1-1fa0-41e7-ab66-5ceb642813f7-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-7bp46\" (UID: \"552a6fc1-1fa0-41e7-ab66-5ceb642813f7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.416076 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"51a0cef8c2f95ab4c597dcfa7c94690c6c5d7a3e2bbfd59c6c7357609c2e960c"} Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.474599 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/552a6fc1-1fa0-41e7-ab66-5ceb642813f7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-7bp46\" (UID: \"552a6fc1-1fa0-41e7-ab66-5ceb642813f7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.474735 
5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/552a6fc1-1fa0-41e7-ab66-5ceb642813f7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-7bp46\" (UID: \"552a6fc1-1fa0-41e7-ab66-5ceb642813f7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.474923 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/552a6fc1-1fa0-41e7-ab66-5ceb642813f7-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-7bp46\" (UID: \"552a6fc1-1fa0-41e7-ab66-5ceb642813f7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.475033 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/552a6fc1-1fa0-41e7-ab66-5ceb642813f7-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-7bp46\" (UID: \"552a6fc1-1fa0-41e7-ab66-5ceb642813f7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.475052 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/552a6fc1-1fa0-41e7-ab66-5ceb642813f7-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-7bp46\" (UID: \"552a6fc1-1fa0-41e7-ab66-5ceb642813f7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.475122 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/552a6fc1-1fa0-41e7-ab66-5ceb642813f7-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-7bp46\" (UID: \"552a6fc1-1fa0-41e7-ab66-5ceb642813f7\") 
" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.475444 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/552a6fc1-1fa0-41e7-ab66-5ceb642813f7-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-7bp46\" (UID: \"552a6fc1-1fa0-41e7-ab66-5ceb642813f7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.475737 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/552a6fc1-1fa0-41e7-ab66-5ceb642813f7-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-7bp46\" (UID: \"552a6fc1-1fa0-41e7-ab66-5ceb642813f7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.489346 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/552a6fc1-1fa0-41e7-ab66-5ceb642813f7-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-7bp46\" (UID: \"552a6fc1-1fa0-41e7-ab66-5ceb642813f7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.494848 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/552a6fc1-1fa0-41e7-ab66-5ceb642813f7-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-7bp46\" (UID: \"552a6fc1-1fa0-41e7-ab66-5ceb642813f7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: I0121 00:10:53.653401 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" Jan 21 00:10:53 crc kubenswrapper[5118]: W0121 00:10:53.667044 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod552a6fc1_1fa0_41e7_ab66_5ceb642813f7.slice/crio-b761b176346ffee83d317b4587b1d1a73c772e36d88dc44f0ab51fc8a418ff10 WatchSource:0}: Error finding container b761b176346ffee83d317b4587b1d1a73c772e36d88dc44f0ab51fc8a418ff10: Status 404 returned error can't find the container with id b761b176346ffee83d317b4587b1d1a73c772e36d88dc44f0ab51fc8a418ff10 Jan 21 00:10:54 crc kubenswrapper[5118]: I0121 00:10:54.424034 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerStarted","Data":"e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e"} Jan 21 00:10:54 crc kubenswrapper[5118]: I0121 00:10:54.424098 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerStarted","Data":"1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1"} Jan 21 00:10:54 crc kubenswrapper[5118]: I0121 00:10:54.424116 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerStarted","Data":"6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f"} Jan 21 00:10:54 crc kubenswrapper[5118]: I0121 00:10:54.424130 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerStarted","Data":"44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3"} Jan 21 00:10:54 crc kubenswrapper[5118]: I0121 00:10:54.424141 5118 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerStarted","Data":"5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961"} Jan 21 00:10:54 crc kubenswrapper[5118]: I0121 00:10:54.424151 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerStarted","Data":"3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407"} Jan 21 00:10:54 crc kubenswrapper[5118]: I0121 00:10:54.425899 5118 generic.go:358] "Generic (PLEG): container finished" podID="0541bb33-5d4a-4ef9-964c-884c727499f6" containerID="87232178bad213e50191b15a1a144677b0383071aff1a36de8e0f096e50344d8" exitCode=0 Jan 21 00:10:54 crc kubenswrapper[5118]: I0121 00:10:54.425947 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" event={"ID":"0541bb33-5d4a-4ef9-964c-884c727499f6","Type":"ContainerDied","Data":"87232178bad213e50191b15a1a144677b0383071aff1a36de8e0f096e50344d8"} Jan 21 00:10:54 crc kubenswrapper[5118]: I0121 00:10:54.427294 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" event={"ID":"552a6fc1-1fa0-41e7-ab66-5ceb642813f7","Type":"ContainerStarted","Data":"8dbf2fd9cfc6d732eacaa1d41e33221e22ee5ed37bde5bea77613d50038c9e8e"} Jan 21 00:10:54 crc kubenswrapper[5118]: I0121 00:10:54.427345 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" event={"ID":"552a6fc1-1fa0-41e7-ab66-5ceb642813f7","Type":"ContainerStarted","Data":"b761b176346ffee83d317b4587b1d1a73c772e36d88dc44f0ab51fc8a418ff10"} Jan 21 00:10:54 crc kubenswrapper[5118]: I0121 00:10:54.469600 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bp46" podStartSLOduration=88.469582236 podStartE2EDuration="1m28.469582236s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:10:54.469190116 +0000 UTC m=+109.793437144" watchObservedRunningTime="2026-01-21 00:10:54.469582236 +0000 UTC m=+109.793829254" Jan 21 00:10:54 crc kubenswrapper[5118]: I0121 00:10:54.980661 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 00:10:54 crc kubenswrapper[5118]: I0121 00:10:54.980913 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:10:54 crc kubenswrapper[5118]: E0121 00:10:54.981052 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 00:10:54 crc kubenswrapper[5118]: E0121 00:10:54.981093 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 00:10:54 crc kubenswrapper[5118]: I0121 00:10:54.981542 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:54 crc kubenswrapper[5118]: I0121 00:10:54.981730 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:54 crc kubenswrapper[5118]: E0121 00:10:54.982860 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 00:10:54 crc kubenswrapper[5118]: E0121 00:10:54.983000 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6" Jan 21 00:10:56 crc kubenswrapper[5118]: I0121 00:10:56.975457 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:10:56 crc kubenswrapper[5118]: I0121 00:10:56.975629 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:56 crc kubenswrapper[5118]: E0121 00:10:56.975947 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 00:10:56 crc kubenswrapper[5118]: I0121 00:10:56.975699 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:56 crc kubenswrapper[5118]: E0121 00:10:56.976066 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 00:10:56 crc kubenswrapper[5118]: E0121 00:10:56.976078 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6" Jan 21 00:10:56 crc kubenswrapper[5118]: I0121 00:10:56.975660 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 00:10:56 crc kubenswrapper[5118]: E0121 00:10:56.976230 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 00:10:57 crc kubenswrapper[5118]: I0121 00:10:57.439144 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerStarted","Data":"47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9"} Jan 21 00:10:57 crc kubenswrapper[5118]: I0121 00:10:57.441967 5118 generic.go:358] "Generic (PLEG): container finished" podID="0541bb33-5d4a-4ef9-964c-884c727499f6" containerID="2744f0406a628234c94f352a8b1c3c7a14400fb1ad29988904b18daaacfe659c" exitCode=0 Jan 21 00:10:57 crc kubenswrapper[5118]: I0121 00:10:57.442079 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" event={"ID":"0541bb33-5d4a-4ef9-964c-884c727499f6","Type":"ContainerDied","Data":"2744f0406a628234c94f352a8b1c3c7a14400fb1ad29988904b18daaacfe659c"} Jan 21 00:10:58 crc kubenswrapper[5118]: I0121 00:10:58.448927 5118 generic.go:358] "Generic (PLEG): container finished" podID="0541bb33-5d4a-4ef9-964c-884c727499f6" containerID="45c0702b3f152cb0ea715859fadb9179e442e9707e0f0418a77718fc861ba823" exitCode=0 Jan 21 00:10:58 crc kubenswrapper[5118]: I0121 00:10:58.449009 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" event={"ID":"0541bb33-5d4a-4ef9-964c-884c727499f6","Type":"ContainerDied","Data":"45c0702b3f152cb0ea715859fadb9179e442e9707e0f0418a77718fc861ba823"} Jan 21 00:10:58 crc kubenswrapper[5118]: I0121 00:10:58.975375 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:10:58 crc kubenswrapper[5118]: I0121 00:10:58.975483 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:10:58 crc kubenswrapper[5118]: I0121 00:10:58.975376 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:10:58 crc kubenswrapper[5118]: E0121 00:10:58.975569 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 00:10:58 crc kubenswrapper[5118]: E0121 00:10:58.975485 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 00:10:58 crc kubenswrapper[5118]: E0121 00:10:58.975652 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6" Jan 21 00:10:58 crc kubenswrapper[5118]: I0121 00:10:58.975682 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 00:10:58 crc kubenswrapper[5118]: E0121 00:10:58.975742 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 00:10:59 crc kubenswrapper[5118]: I0121 00:10:59.455020 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerStarted","Data":"cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a"} Jan 21 00:10:59 crc kubenswrapper[5118]: I0121 00:10:59.455712 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:59 crc kubenswrapper[5118]: I0121 00:10:59.455747 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:59 crc kubenswrapper[5118]: I0121 00:10:59.455757 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:59 crc kubenswrapper[5118]: I0121 00:10:59.466189 5118 generic.go:358] "Generic (PLEG): container finished" podID="0541bb33-5d4a-4ef9-964c-884c727499f6" containerID="d46a19cccd53e6a6f4c5409f1dc44cfa58b6704802c52b0cc5c93bf9440fa50f" exitCode=0 Jan 21 00:10:59 crc kubenswrapper[5118]: I0121 00:10:59.466232 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" 
event={"ID":"0541bb33-5d4a-4ef9-964c-884c727499f6","Type":"ContainerDied","Data":"d46a19cccd53e6a6f4c5409f1dc44cfa58b6704802c52b0cc5c93bf9440fa50f"} Jan 21 00:10:59 crc kubenswrapper[5118]: I0121 00:10:59.479017 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:59 crc kubenswrapper[5118]: I0121 00:10:59.489436 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:10:59 crc kubenswrapper[5118]: I0121 00:10:59.505982 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" podStartSLOduration=93.505965683 podStartE2EDuration="1m33.505965683s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:10:59.483019944 +0000 UTC m=+114.807266982" watchObservedRunningTime="2026-01-21 00:10:59.505965683 +0000 UTC m=+114.830212701" Jan 21 00:11:00 crc kubenswrapper[5118]: I0121 00:11:00.475934 5118 generic.go:358] "Generic (PLEG): container finished" podID="0541bb33-5d4a-4ef9-964c-884c727499f6" containerID="7bc325ec49dca2846a361184e6811873d608577a261c6cff4a7920c4cc3811df" exitCode=0 Jan 21 00:11:00 crc kubenswrapper[5118]: I0121 00:11:00.476030 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" event={"ID":"0541bb33-5d4a-4ef9-964c-884c727499f6","Type":"ContainerDied","Data":"7bc325ec49dca2846a361184e6811873d608577a261c6cff4a7920c4cc3811df"} Jan 21 00:11:00 crc kubenswrapper[5118]: I0121 00:11:00.975365 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:11:00 crc kubenswrapper[5118]: E0121 00:11:00.975496 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 00:11:00 crc kubenswrapper[5118]: I0121 00:11:00.975527 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf" Jan 21 00:11:00 crc kubenswrapper[5118]: I0121 00:11:00.975614 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 00:11:00 crc kubenswrapper[5118]: E0121 00:11:00.975777 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 00:11:00 crc kubenswrapper[5118]: E0121 00:11:00.975924 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6" Jan 21 00:11:00 crc kubenswrapper[5118]: I0121 00:11:00.976077 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 00:11:00 crc kubenswrapper[5118]: E0121 00:11:00.976256 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 00:11:01 crc kubenswrapper[5118]: I0121 00:11:01.483149 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" event={"ID":"0541bb33-5d4a-4ef9-964c-884c727499f6","Type":"ContainerStarted","Data":"4a8dc9e85fa7f5cba33b0bb718a52f35f81fdc84cd96083c45a147ec57e17690"} Jan 21 00:11:01 crc kubenswrapper[5118]: I0121 00:11:01.512007 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-d4lsz" podStartSLOduration=95.511985563 podStartE2EDuration="1m35.511985563s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:01.511529591 +0000 UTC m=+116.835776649" watchObservedRunningTime="2026-01-21 00:11:01.511985563 +0000 UTC m=+116.836232591" Jan 21 00:11:01 crc kubenswrapper[5118]: I0121 00:11:01.627200 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9hvtf"] Jan 21 00:11:01 crc kubenswrapper[5118]: I0121 00:11:01.627328 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:11:01 crc kubenswrapper[5118]: E0121 00:11:01.627417 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6"
Jan 21 00:11:02 crc kubenswrapper[5118]: I0121 00:11:02.974970 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 00:11:02 crc kubenswrapper[5118]: I0121 00:11:02.974995 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:11:02 crc kubenswrapper[5118]: E0121 00:11:02.975128 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 21 00:11:02 crc kubenswrapper[5118]: I0121 00:11:02.975153 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 00:11:02 crc kubenswrapper[5118]: E0121 00:11:02.975295 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 21 00:11:02 crc kubenswrapper[5118]: E0121 00:11:02.975377 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 21 00:11:03 crc kubenswrapper[5118]: I0121 00:11:03.341422 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 00:11:03 crc kubenswrapper[5118]: I0121 00:11:03.975637 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:11:03 crc kubenswrapper[5118]: E0121 00:11:03.975905 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6"
Jan 21 00:11:04 crc kubenswrapper[5118]: I0121 00:11:04.811639 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:11:04 crc kubenswrapper[5118]: I0121 00:11:04.811740 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.811873 5118 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.811952 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 00:11:36.811927567 +0000 UTC m=+152.136174595 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.812025 5118 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.812209 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 00:11:36.812137202 +0000 UTC m=+152.136384260 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 00:11:04 crc kubenswrapper[5118]: I0121 00:11:04.912833 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 00:11:04 crc kubenswrapper[5118]: I0121 00:11:04.912902 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.913055 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.913086 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.913097 5118 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.913119 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.913147 5118 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.913200 5118 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.913177 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 00:11:36.913143714 +0000 UTC m=+152.237390732 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.913300 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 00:11:36.913275498 +0000 UTC m=+152.237522556 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.950234 5118 kubelet_node_status.go:509] "Node not becoming ready in time after startup"
Jan 21 00:11:04 crc kubenswrapper[5118]: I0121 00:11:04.976679 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.976771 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 21 00:11:04 crc kubenswrapper[5118]: I0121 00:11:04.976800 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.976923 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 21 00:11:04 crc kubenswrapper[5118]: I0121 00:11:04.977003 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 00:11:04 crc kubenswrapper[5118]: E0121 00:11:04.977051 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 21 00:11:05 crc kubenswrapper[5118]: E0121 00:11:05.045011 5118 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 00:11:05 crc kubenswrapper[5118]: I0121 00:11:05.520521 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:05 crc kubenswrapper[5118]: I0121 00:11:05.520679 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs\") pod \"network-metrics-daemon-9hvtf\" (UID: \"21105fbf-0225-4ba6-ba90-17808d5250c6\") " pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:11:05 crc kubenswrapper[5118]: E0121 00:11:05.520740 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:37.520691135 +0000 UTC m=+152.844938163 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:05 crc kubenswrapper[5118]: E0121 00:11:05.520828 5118 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 00:11:05 crc kubenswrapper[5118]: E0121 00:11:05.520933 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs podName:21105fbf-0225-4ba6-ba90-17808d5250c6 nodeName:}" failed. No retries permitted until 2026-01-21 00:11:37.520910091 +0000 UTC m=+152.845157119 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs") pod "network-metrics-daemon-9hvtf" (UID: "21105fbf-0225-4ba6-ba90-17808d5250c6") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 00:11:05 crc kubenswrapper[5118]: I0121 00:11:05.975043 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:11:05 crc kubenswrapper[5118]: E0121 00:11:05.975232 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6"
Jan 21 00:11:06 crc kubenswrapper[5118]: I0121 00:11:06.974902 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:11:06 crc kubenswrapper[5118]: E0121 00:11:06.975094 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 21 00:11:06 crc kubenswrapper[5118]: I0121 00:11:06.975214 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 00:11:06 crc kubenswrapper[5118]: E0121 00:11:06.975597 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 21 00:11:06 crc kubenswrapper[5118]: I0121 00:11:06.975694 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 00:11:06 crc kubenswrapper[5118]: E0121 00:11:06.975807 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 21 00:11:07 crc kubenswrapper[5118]: I0121 00:11:07.974769 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:11:07 crc kubenswrapper[5118]: E0121 00:11:07.974975 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6"
Jan 21 00:11:08 crc kubenswrapper[5118]: I0121 00:11:08.975126 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:11:08 crc kubenswrapper[5118]: I0121 00:11:08.975147 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 00:11:08 crc kubenswrapper[5118]: E0121 00:11:08.975306 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 21 00:11:08 crc kubenswrapper[5118]: I0121 00:11:08.975417 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 00:11:08 crc kubenswrapper[5118]: E0121 00:11:08.976230 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 21 00:11:08 crc kubenswrapper[5118]: E0121 00:11:08.976360 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 21 00:11:09 crc kubenswrapper[5118]: I0121 00:11:09.974721 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:11:09 crc kubenswrapper[5118]: E0121 00:11:09.974915 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hvtf" podUID="21105fbf-0225-4ba6-ba90-17808d5250c6"
Jan 21 00:11:10 crc kubenswrapper[5118]: I0121 00:11:10.975524 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:11:10 crc kubenswrapper[5118]: I0121 00:11:10.975525 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 00:11:10 crc kubenswrapper[5118]: I0121 00:11:10.975537 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 00:11:10 crc kubenswrapper[5118]: I0121 00:11:10.977797 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Jan 21 00:11:10 crc kubenswrapper[5118]: I0121 00:11:10.978526 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 21 00:11:10 crc kubenswrapper[5118]: I0121 00:11:10.978552 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Jan 21 00:11:10 crc kubenswrapper[5118]: I0121 00:11:10.978789 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Jan 21 00:11:11 crc kubenswrapper[5118]: I0121 00:11:11.406600 5118 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Jan 21 00:11:11 crc kubenswrapper[5118]: I0121 00:11:11.456487 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-trbkq"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.196655 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-r2gm9"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.196895 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.203100 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.203373 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.203632 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.209412 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.209704 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.210310 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.219016 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.228413 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5gv2n"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.231723 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.232413 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.232713 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.235139 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.238413 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.238523 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.238585 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.242816 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mn7c4"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.243540 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.243906 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.244589 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.245462 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-7lpxz"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.248316 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.248501 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.248561 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.248611 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.248630 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.248733 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.248799 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.248826 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.248994 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mn7c4"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.249035 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.249069 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.249370 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.249010 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.249563 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.249718 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.249755 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.249574 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.249953 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.250482 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.250483 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.250742 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.260987 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.264059 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.266770 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.267050 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.275342 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.275448 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.275633 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.275668 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.275924 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.277720 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29482560-n7qwb"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.278334 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.280368 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.281656 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.281899 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.281935 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.282007 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.282020 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.282077 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.282145 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.282219 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.282266 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.282281 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.282417 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.282467 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.282582 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.283367 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.285821 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.287617 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-5ds28"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.288315 5118 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/image-pruner-29482560-n7qwb" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.288908 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.289136 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.290740 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.292574 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.292625 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.292854 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.293215 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.294699 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.295039 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.295048 5118 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.295257 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.295375 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.295509 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.296235 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.296380 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-5ds28" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.296494 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.296638 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.296385 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.296816 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.296835 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.296998 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.297390 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.298741 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-xbtg4"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.299724 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.300892 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6796e6ff-3d28-4061-a0b4-cd8088da6919-encryption-config\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.300936 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-client-ca\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.300961 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79567875-e72c-4685-8919-03cda9a6f644-serving-cert\") pod \"authentication-operator-7f5c659b84-5zl5v\" (UID: \"79567875-e72c-4685-8919-03cda9a6f644\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.300986 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-audit-policies\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301027 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301128 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301194 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1968a714-512b-40f9-a302-f8905b0855fd-config\") pod \"route-controller-manager-776cdc94d6-s55xm\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301236 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4580359c-1bab-4bde-a783-bc3866e460a0-config\") pod \"openshift-apiserver-operator-846cbfc458-c5g8n\" (UID: \"4580359c-1bab-4bde-a783-bc3866e460a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301271 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgpcn\" (UniqueName: \"kubernetes.io/projected/1202d380-a207-455c-8bd8-2b82e7974afa-kube-api-access-rgpcn\") pod 
\"machine-api-operator-755bb95488-r2gm9\" (UID: \"1202d380-a207-455c-8bd8-2b82e7974afa\") " pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301304 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6796e6ff-3d28-4061-a0b4-cd8088da6919-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301335 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79567875-e72c-4685-8919-03cda9a6f644-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-5zl5v\" (UID: \"79567875-e72c-4685-8919-03cda9a6f644\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301400 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25pkl\" (UniqueName: \"kubernetes.io/projected/79567875-e72c-4685-8919-03cda9a6f644-kube-api-access-25pkl\") pod \"authentication-operator-7f5c659b84-5zl5v\" (UID: \"79567875-e72c-4685-8919-03cda9a6f644\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301435 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6796e6ff-3d28-4061-a0b4-cd8088da6919-etcd-client\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301467 5118 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6796e6ff-3d28-4061-a0b4-cd8088da6919-serving-cert\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301497 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz6wz\" (UniqueName: \"kubernetes.io/projected/19280e75-8f04-47d1-bc42-124082dfd247-kube-api-access-xz6wz\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301529 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jct6m\" (UniqueName: \"kubernetes.io/projected/1968a714-512b-40f9-a302-f8905b0855fd-kube-api-access-jct6m\") pod \"route-controller-manager-776cdc94d6-s55xm\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301560 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79567875-e72c-4685-8919-03cda9a6f644-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-5zl5v\" (UID: \"79567875-e72c-4685-8919-03cda9a6f644\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301608 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301644 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19280e75-8f04-47d1-bc42-124082dfd247-audit-dir\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301672 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1968a714-512b-40f9-a302-f8905b0855fd-tmp\") pod \"route-controller-manager-776cdc94d6-s55xm\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301726 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6796e6ff-3d28-4061-a0b4-cd8088da6919-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301772 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6796e6ff-3d28-4061-a0b4-cd8088da6919-config\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301804 5118 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6796e6ff-3d28-4061-a0b4-cd8088da6919-image-import-ca\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301855 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301883 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301915 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301957 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1968a714-512b-40f9-a302-f8905b0855fd-client-ca\") 
pod \"route-controller-manager-776cdc94d6-s55xm\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.301993 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302015 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1968a714-512b-40f9-a302-f8905b0855fd-serving-cert\") pod \"route-controller-manager-776cdc94d6-s55xm\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302032 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-config\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302068 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdj49\" (UniqueName: \"kubernetes.io/projected/0d6c776b-eaf0-4068-983b-d848bbc96323-kube-api-access-kdj49\") pod \"cluster-samples-operator-6b564684c8-mn7c4\" (UID: \"0d6c776b-eaf0-4068-983b-d848bbc96323\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mn7c4" 
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302090 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302124 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-jdqmz"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302195 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302313 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302365 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302125 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1202d380-a207-455c-8bd8-2b82e7974afa-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-r2gm9\" (UID: \"1202d380-a207-455c-8bd8-2b82e7974afa\") " pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302435 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5r4q\" (UniqueName: \"kubernetes.io/projected/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-kube-api-access-c5r4q\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302456 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-serving-cert\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302476 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1202d380-a207-455c-8bd8-2b82e7974afa-config\") pod \"machine-api-operator-755bb95488-r2gm9\" (UID: \"1202d380-a207-455c-8bd8-2b82e7974afa\") " pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302492 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4580359c-1bab-4bde-a783-bc3866e460a0-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-c5g8n\" (UID: \"4580359c-1bab-4bde-a783-bc3866e460a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302509 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btb6d\" (UniqueName: \"kubernetes.io/projected/4580359c-1bab-4bde-a783-bc3866e460a0-kube-api-access-btb6d\") pod \"openshift-apiserver-operator-846cbfc458-c5g8n\" (UID: \"4580359c-1bab-4bde-a783-bc3866e460a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302527 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq2ng\" (UniqueName: \"kubernetes.io/projected/6796e6ff-3d28-4061-a0b4-cd8088da6919-kube-api-access-mq2ng\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302540 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-tmp\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302559 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: 
\"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302579 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6796e6ff-3d28-4061-a0b4-cd8088da6919-node-pullsecrets\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302594 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6796e6ff-3d28-4061-a0b4-cd8088da6919-audit-dir\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302599 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302609 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302625 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302640 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302652 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302654 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1202d380-a207-455c-8bd8-2b82e7974afa-images\") pod \"machine-api-operator-755bb95488-r2gm9\" (UID: \"1202d380-a207-455c-8bd8-2b82e7974afa\") " pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302778 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d6c776b-eaf0-4068-983b-d848bbc96323-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-mn7c4\" (UID: \"0d6c776b-eaf0-4068-983b-d848bbc96323\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mn7c4"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302800 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79567875-e72c-4685-8919-03cda9a6f644-config\") pod \"authentication-operator-7f5c659b84-5zl5v\" (UID: \"79567875-e72c-4685-8919-03cda9a6f644\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.302824 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6796e6ff-3d28-4061-a0b4-cd8088da6919-audit\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.304741 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.305211 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.305298 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.305765 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.305852 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.306015 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.306308 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.306423 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.306596 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.306616 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.307913 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.307950 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.308320 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-pdh68"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.319310 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tlb84"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.320732 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.320968 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-pdh68"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.324495 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.324655 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.324725 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.325187 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.325343 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.325663 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.326800 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.328702 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.329491 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.329544 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.329735 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.329926 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.330131 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.330533 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.330925 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.331727 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.331774 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.334424 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.335245 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.335650 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.335810 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.338903 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.341054 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.342852 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-jnbtq"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.351887 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.352039 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-jnbtq"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.355787 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.355905 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.359038 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.365651 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-trbkq"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.365682 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5gv2n"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.365693 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.366070 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.369589 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.369751 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.373774 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-jrk8q"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.373899 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.378319 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.378453 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.379604 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.382481 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.382703 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.384718 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-s4czg"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.384828 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.388089 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.388266 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-wkjhb"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.388231 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s4czg"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.396782 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp"]
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.398330 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-wkjhb"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.406276 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.406604 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f-serving-cert\") pod \"openshift-config-operator-5777786469-jdqmz\" (UID: \"6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.406879 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19280e75-8f04-47d1-bc42-124082dfd247-audit-dir\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.407093 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1968a714-512b-40f9-a302-f8905b0855fd-tmp\") pod \"route-controller-manager-776cdc94d6-s55xm\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.407492 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d042745d-98a0-44c8-ac92-7704d8b43b84-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-tgjx7\" (UID: \"d042745d-98a0-44c8-ac92-7704d8b43b84\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.407605 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19280e75-8f04-47d1-bc42-124082dfd247-audit-dir\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.407700 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6796e6ff-3d28-4061-a0b4-cd8088da6919-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.407962 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtmdm\" (UniqueName: \"kubernetes.io/projected/940d93fe-9ecf-4274-9caf-6123a0ce203c-kube-api-access-vtmdm\") pod \"ingress-operator-6b9cb4dbcf-w2r2c\" (UID: \"940d93fe-9ecf-4274-9caf-6123a0ce203c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.408133 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6796e6ff-3d28-4061-a0b4-cd8088da6919-config\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.408249 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6796e6ff-3d28-4061-a0b4-cd8088da6919-image-import-ca\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.408353 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.408461 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.408801 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.408884 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1968a714-512b-40f9-a302-f8905b0855fd-client-ca\") pod \"route-controller-manager-776cdc94d6-s55xm\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.408972 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.407980 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1968a714-512b-40f9-a302-f8905b0855fd-tmp\") pod \"route-controller-manager-776cdc94d6-s55xm\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.409203 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.409267 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2cd766-1945-4e3d-aa8a-4045eacb2ff8-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-89cvg\" (UID: \"1f2cd766-1945-4e3d-aa8a-4045eacb2ff8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.409297 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4d8423b3-e68c-4083-859f-e89f705f28bd-audit-policies\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.409802 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd4c6f53-d565-473d-9d09-b5190fa3d71a-config\") pod \"kube-controller-manager-operator-69d5f845f8-6d476\" (UID: \"bd4c6f53-d565-473d-9d09-b5190fa3d71a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.409791 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6796e6ff-3d28-4061-a0b4-cd8088da6919-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.409892 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1968a714-512b-40f9-a302-f8905b0855fd-serving-cert\") pod \"route-controller-manager-776cdc94d6-s55xm\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.409954 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-config\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.410100 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.410146 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6796e6ff-3d28-4061-a0b4-cd8088da6919-config\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.410195 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5k9z\" (UniqueName: \"kubernetes.io/projected/ae767afd-59d5-4c04-9ecc-f9ae7b317698-kube-api-access-j5k9z\") pod \"image-pruner-29482560-n7qwb\" (UID: \"ae767afd-59d5-4c04-9ecc-f9ae7b317698\") " pod="openshift-image-registry/image-pruner-29482560-n7qwb"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.410273 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kdj49\" (UniqueName: \"kubernetes.io/projected/0d6c776b-eaf0-4068-983b-d848bbc96323-kube-api-access-kdj49\") pod \"cluster-samples-operator-6b564684c8-mn7c4\" (UID: \"0d6c776b-eaf0-4068-983b-d848bbc96323\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mn7c4"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.410360 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4d8423b3-e68c-4083-859f-e89f705f28bd-etcd-serving-ca\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.410444 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.410499 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4d8423b3-e68c-4083-859f-e89f705f28bd-etcd-client\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.410535 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d7mc\" (UniqueName: \"kubernetes.io/projected/f2431df6-6390-4fb8-b13e-56750ad2fed4-kube-api-access-6d7mc\") pod \"downloads-747b44746d-pdh68\" (UID: \"f2431df6-6390-4fb8-b13e-56750ad2fed4\") " pod="openshift-console/downloads-747b44746d-pdh68"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.410567 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/940d93fe-9ecf-4274-9caf-6123a0ce203c-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-w2r2c\" (UID: \"940d93fe-9ecf-4274-9caf-6123a0ce203c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.410597 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d042745d-98a0-44c8-ac92-7704d8b43b84-config\") pod \"openshift-kube-scheduler-operator-54f497555d-tgjx7\" (UID: \"d042745d-98a0-44c8-ac92-7704d8b43b84\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.410667 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d042745d-98a0-44c8-ac92-7704d8b43b84-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-tgjx7\" (UID: \"d042745d-98a0-44c8-ac92-7704d8b43b84\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.412257 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-config\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.413120 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1202d380-a207-455c-8bd8-2b82e7974afa-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-r2gm9\" (UID: \"1202d380-a207-455c-8bd8-2b82e7974afa\") " pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.413272 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd4c6f53-d565-473d-9d09-b5190fa3d71a-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-6d476\" (UID: \"bd4c6f53-d565-473d-9d09-b5190fa3d71a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.413323 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/65be3f94-f1d5-4ebb-933f-216e1650f309-tmp-dir\") pod \"kube-apiserver-operator-575994946d-c8s6j\" (UID: \"65be3f94-f1d5-4ebb-933f-216e1650f309\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.413395 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1968a714-512b-40f9-a302-f8905b0855fd-client-ca\") pod \"route-controller-manager-776cdc94d6-s55xm\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.413467 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c5r4q\" (UniqueName: \"kubernetes.io/projected/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-kube-api-access-c5r4q\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.413513 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-serving-cert\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.413581 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1202d380-a207-455c-8bd8-2b82e7974afa-config\") pod \"machine-api-operator-755bb95488-r2gm9\" (UID: \"1202d380-a207-455c-8bd8-2b82e7974afa\") " pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.413634 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4brfs\" (UniqueName: \"kubernetes.io/projected/ba04635f-4c5f-4669-af58-97627beae1b2-kube-api-access-4brfs\") pod \"console-operator-67c89758df-5ds28\" (UID: \"ba04635f-4c5f-4669-af58-97627beae1b2\") " pod="openshift-console-operator/console-operator-67c89758df-5ds28"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.413669 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4580359c-1bab-4bde-a783-bc3866e460a0-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-c5g8n\" (UID: \"4580359c-1bab-4bde-a783-bc3866e460a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.413697 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-btb6d\" (UniqueName: \"kubernetes.io/projected/4580359c-1bab-4bde-a783-bc3866e460a0-kube-api-access-btb6d\") pod \"openshift-apiserver-operator-846cbfc458-c5g8n\" (UID: \"4580359c-1bab-4bde-a783-bc3866e460a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.413722 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4d8423b3-e68c-4083-859f-e89f705f28bd-audit-dir\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.413781 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65be3f94-f1d5-4ebb-933f-216e1650f309-kube-api-access\") pod \"kube-apiserver-operator-575994946d-c8s6j\" (UID: \"65be3f94-f1d5-4ebb-933f-216e1650f309\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414075 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mq2ng\" (UniqueName: \"kubernetes.io/projected/6796e6ff-3d28-4061-a0b4-cd8088da6919-kube-api-access-mq2ng\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414136 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-tmp\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414214 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414258 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f2cd766-1945-4e3d-aa8a-4045eacb2ff8-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-89cvg\" (UID: \"1f2cd766-1945-4e3d-aa8a-4045eacb2ff8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414291 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d8423b3-e68c-4083-859f-e89f705f28bd-serving-cert\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414340 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414362 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba04635f-4c5f-4669-af58-97627beae1b2-serving-cert\") pod \"console-operator-67c89758df-5ds28\" (UID: \"ba04635f-4c5f-4669-af58-97627beae1b2\") " pod="openshift-console-operator/console-operator-67c89758df-5ds28"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414388 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6796e6ff-3d28-4061-a0b4-cd8088da6919-node-pullsecrets\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz"
Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414403 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6796e6ff-3d28-4061-a0b4-cd8088da6919-audit-dir\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID:
\"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414422 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414438 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414458 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414475 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1202d380-a207-455c-8bd8-2b82e7974afa-images\") pod \"machine-api-operator-755bb95488-r2gm9\" (UID: \"1202d380-a207-455c-8bd8-2b82e7974afa\") " pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414491 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/0d6c776b-eaf0-4068-983b-d848bbc96323-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-mn7c4\" (UID: \"0d6c776b-eaf0-4068-983b-d848bbc96323\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mn7c4" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414509 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79567875-e72c-4685-8919-03cda9a6f644-config\") pod \"authentication-operator-7f5c659b84-5zl5v\" (UID: \"79567875-e72c-4685-8919-03cda9a6f644\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414531 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba04635f-4c5f-4669-af58-97627beae1b2-config\") pod \"console-operator-67c89758df-5ds28\" (UID: \"ba04635f-4c5f-4669-af58-97627beae1b2\") " pod="openshift-console-operator/console-operator-67c89758df-5ds28" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414534 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6796e6ff-3d28-4061-a0b4-cd8088da6919-node-pullsecrets\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414552 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6796e6ff-3d28-4061-a0b4-cd8088da6919-audit\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414895 5118 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609947aa-6e7c-439b-a7a7-7d06f8ab4f1c-config\") pod \"machine-approver-54c688565-cnnpj\" (UID: \"609947aa-6e7c-439b-a7a7-7d06f8ab4f1c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414924 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4d8423b3-e68c-4083-859f-e89f705f28bd-encryption-config\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414943 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bd4c6f53-d565-473d-9d09-b5190fa3d71a-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-6d476\" (UID: \"bd4c6f53-d565-473d-9d09-b5190fa3d71a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414969 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6796e6ff-3d28-4061-a0b4-cd8088da6919-encryption-config\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414987 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-client-ca\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415007 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79567875-e72c-4685-8919-03cda9a6f644-serving-cert\") pod \"authentication-operator-7f5c659b84-5zl5v\" (UID: \"79567875-e72c-4685-8919-03cda9a6f644\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415027 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-audit-policies\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415052 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415070 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/609947aa-6e7c-439b-a7a7-7d06f8ab4f1c-auth-proxy-config\") pod \"machine-approver-54c688565-cnnpj\" (UID: \"609947aa-6e7c-439b-a7a7-7d06f8ab4f1c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415088 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xmt9z\" (UniqueName: \"kubernetes.io/projected/609947aa-6e7c-439b-a7a7-7d06f8ab4f1c-kube-api-access-xmt9z\") pod \"machine-approver-54c688565-cnnpj\" (UID: \"609947aa-6e7c-439b-a7a7-7d06f8ab4f1c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415237 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d8423b3-e68c-4083-859f-e89f705f28bd-trusted-ca-bundle\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415333 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nkx6r"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415466 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/940d93fe-9ecf-4274-9caf-6123a0ce203c-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-w2r2c\" (UID: \"940d93fe-9ecf-4274-9caf-6123a0ce203c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415579 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415671 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1968a714-512b-40f9-a302-f8905b0855fd-config\") pod \"route-controller-manager-776cdc94d6-s55xm\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415749 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/39000d94-11b8-42ff-a127-2136d0f2cc0b-tmp-dir\") pod \"dns-operator-799b87ffcd-jnbtq\" (UID: \"39000d94-11b8-42ff-a127-2136d0f2cc0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jnbtq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415831 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ae767afd-59d5-4c04-9ecc-f9ae7b317698-serviceca\") pod \"image-pruner-29482560-n7qwb\" (UID: \"ae767afd-59d5-4c04-9ecc-f9ae7b317698\") " pod="openshift-image-registry/image-pruner-29482560-n7qwb" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415901 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d042745d-98a0-44c8-ac92-7704d8b43b84-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-tgjx7\" (UID: \"d042745d-98a0-44c8-ac92-7704d8b43b84\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415986 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4580359c-1bab-4bde-a783-bc3866e460a0-config\") pod \"openshift-apiserver-operator-846cbfc458-c5g8n\" (UID: \"4580359c-1bab-4bde-a783-bc3866e460a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n" Jan 21 00:11:12 crc kubenswrapper[5118]: 
I0121 00:11:12.416067 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rgpcn\" (UniqueName: \"kubernetes.io/projected/1202d380-a207-455c-8bd8-2b82e7974afa-kube-api-access-rgpcn\") pod \"machine-api-operator-755bb95488-r2gm9\" (UID: \"1202d380-a207-455c-8bd8-2b82e7974afa\") " pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.416135 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/609947aa-6e7c-439b-a7a7-7d06f8ab4f1c-machine-approver-tls\") pod \"machine-approver-54c688565-cnnpj\" (UID: \"609947aa-6e7c-439b-a7a7-7d06f8ab4f1c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.416217 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1202d380-a207-455c-8bd8-2b82e7974afa-images\") pod \"machine-api-operator-755bb95488-r2gm9\" (UID: \"1202d380-a207-455c-8bd8-2b82e7974afa\") " pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415851 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414564 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-tmp\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415762 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414576 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1202d380-a207-455c-8bd8-2b82e7974afa-config\") pod \"machine-api-operator-755bb95488-r2gm9\" (UID: \"1202d380-a207-455c-8bd8-2b82e7974afa\") " pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.418082 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6796e6ff-3d28-4061-a0b4-cd8088da6919-image-import-ca\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.418124 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6796e6ff-3d28-4061-a0b4-cd8088da6919-audit\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.419323 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79567875-e72c-4685-8919-03cda9a6f644-config\") pod \"authentication-operator-7f5c659b84-5zl5v\" (UID: \"79567875-e72c-4685-8919-03cda9a6f644\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.419462 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.419815 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1968a714-512b-40f9-a302-f8905b0855fd-config\") pod \"route-controller-manager-776cdc94d6-s55xm\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.415938 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.419986 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.414966 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6796e6ff-3d28-4061-a0b4-cd8088da6919-audit-dir\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 
00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.420370 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.420782 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-audit-policies\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.421092 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.421268 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.421615 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-client-ca\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.422931 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d6c776b-eaf0-4068-983b-d848bbc96323-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-mn7c4\" (UID: \"0d6c776b-eaf0-4068-983b-d848bbc96323\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mn7c4" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.423334 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79567875-e72c-4685-8919-03cda9a6f644-serving-cert\") pod \"authentication-operator-7f5c659b84-5zl5v\" (UID: \"79567875-e72c-4685-8919-03cda9a6f644\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.423696 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1202d380-a207-455c-8bd8-2b82e7974afa-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-r2gm9\" (UID: \"1202d380-a207-455c-8bd8-2b82e7974afa\") " pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.423840 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.423980 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.424609 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.424832 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4580359c-1bab-4bde-a783-bc3866e460a0-config\") pod \"openshift-apiserver-operator-846cbfc458-c5g8n\" (UID: \"4580359c-1bab-4bde-a783-bc3866e460a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.425110 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6796e6ff-3d28-4061-a0b4-cd8088da6919-encryption-config\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.425623 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.425744 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nkx6r" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.416226 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f-available-featuregates\") pod \"openshift-config-operator-5777786469-jdqmz\" (UID: \"6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.427330 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp5z8\" (UniqueName: \"kubernetes.io/projected/6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f-kube-api-access-vp5z8\") pod \"openshift-config-operator-5777786469-jdqmz\" (UID: \"6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.427427 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6796e6ff-3d28-4061-a0b4-cd8088da6919-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.427507 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79567875-e72c-4685-8919-03cda9a6f644-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-5zl5v\" (UID: \"79567875-e72c-4685-8919-03cda9a6f644\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.427630 5118 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba04635f-4c5f-4669-af58-97627beae1b2-trusted-ca\") pod \"console-operator-67c89758df-5ds28\" (UID: \"ba04635f-4c5f-4669-af58-97627beae1b2\") " pod="openshift-console-operator/console-operator-67c89758df-5ds28" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.427744 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd4c6f53-d565-473d-9d09-b5190fa3d71a-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-6d476\" (UID: \"bd4c6f53-d565-473d-9d09-b5190fa3d71a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.427830 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qvfj\" (UniqueName: \"kubernetes.io/projected/39000d94-11b8-42ff-a127-2136d0f2cc0b-kube-api-access-8qvfj\") pod \"dns-operator-799b87ffcd-jnbtq\" (UID: \"39000d94-11b8-42ff-a127-2136d0f2cc0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jnbtq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.427940 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65be3f94-f1d5-4ebb-933f-216e1650f309-serving-cert\") pod \"kube-apiserver-operator-575994946d-c8s6j\" (UID: \"65be3f94-f1d5-4ebb-933f-216e1650f309\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.427997 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6796e6ff-3d28-4061-a0b4-cd8088da6919-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: 
\"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.428067 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-25pkl\" (UniqueName: \"kubernetes.io/projected/79567875-e72c-4685-8919-03cda9a6f644-kube-api-access-25pkl\") pod \"authentication-operator-7f5c659b84-5zl5v\" (UID: \"79567875-e72c-4685-8919-03cda9a6f644\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.428102 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2cd766-1945-4e3d-aa8a-4045eacb2ff8-config\") pod \"openshift-controller-manager-operator-686468bdd5-89cvg\" (UID: \"1f2cd766-1945-4e3d-aa8a-4045eacb2ff8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.428130 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/940d93fe-9ecf-4274-9caf-6123a0ce203c-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-w2r2c\" (UID: \"940d93fe-9ecf-4274-9caf-6123a0ce203c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.428151 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6796e6ff-3d28-4061-a0b4-cd8088da6919-etcd-client\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.428211 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-smrwt\" (UniqueName: \"kubernetes.io/projected/1f2cd766-1945-4e3d-aa8a-4045eacb2ff8-kube-api-access-smrwt\") pod \"openshift-controller-manager-operator-686468bdd5-89cvg\" (UID: \"1f2cd766-1945-4e3d-aa8a-4045eacb2ff8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.428232 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xlj8\" (UniqueName: \"kubernetes.io/projected/4d8423b3-e68c-4083-859f-e89f705f28bd-kube-api-access-2xlj8\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.428248 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65be3f94-f1d5-4ebb-933f-216e1650f309-config\") pod \"kube-apiserver-operator-575994946d-c8s6j\" (UID: \"65be3f94-f1d5-4ebb-933f-216e1650f309\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.427546 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.428305 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6796e6ff-3d28-4061-a0b4-cd8088da6919-serving-cert\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.428403 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xz6wz\" (UniqueName: \"kubernetes.io/projected/19280e75-8f04-47d1-bc42-124082dfd247-kube-api-access-xz6wz\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.428552 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jct6m\" (UniqueName: \"kubernetes.io/projected/1968a714-512b-40f9-a302-f8905b0855fd-kube-api-access-jct6m\") pod \"route-controller-manager-776cdc94d6-s55xm\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.428797 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79567875-e72c-4685-8919-03cda9a6f644-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-5zl5v\" (UID: \"79567875-e72c-4685-8919-03cda9a6f644\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.428913 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79567875-e72c-4685-8919-03cda9a6f644-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-5zl5v\" (UID: \"79567875-e72c-4685-8919-03cda9a6f644\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.428998 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/39000d94-11b8-42ff-a127-2136d0f2cc0b-metrics-tls\") pod \"dns-operator-799b87ffcd-jnbtq\" (UID: \"39000d94-11b8-42ff-a127-2136d0f2cc0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jnbtq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.429896 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-serving-cert\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.430031 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1968a714-512b-40f9-a302-f8905b0855fd-serving-cert\") pod \"route-controller-manager-776cdc94d6-s55xm\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.430139 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79567875-e72c-4685-8919-03cda9a6f644-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-5zl5v\" (UID: \"79567875-e72c-4685-8919-03cda9a6f644\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.431673 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6796e6ff-3d28-4061-a0b4-cd8088da6919-serving-cert\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.433071 5118 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6796e6ff-3d28-4061-a0b4-cd8088da6919-etcd-client\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.433430 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4580359c-1bab-4bde-a783-bc3866e460a0-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-c5g8n\" (UID: \"4580359c-1bab-4bde-a783-bc3866e460a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.433617 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.433826 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.437129 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.437249 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29482560-n7qwb"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.437332 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-7lpxz"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.437399 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-r2gm9"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.437478 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.437543 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.437603 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-5ds28"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.437708 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.437880 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.437964 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-jnbtq"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.438029 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.438098 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.437280 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.438174 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-jdqmz"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.438359 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.438434 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tlb84"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.438739 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mn7c4"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.438834 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-xbtg4"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.438907 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.438985 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.439062 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.439127 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-s4czg"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.439277 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.439357 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-pdh68"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.439420 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.439494 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.439565 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-wkjhb"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.438953 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.439713 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nkx6r"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.439801 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.439879 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.439940 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.447345 5118 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-dns/dns-default-jcv4b"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.447526 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.450533 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.450688 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jcv4b" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.453111 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.453421 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.460340 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.474611 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4vjlk"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.474773 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.479383 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4vjlk" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.479242 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-f7rf5"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.482974 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-g77tt"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.483329 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-f7rf5" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.486314 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-p65gs"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.486468 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.490800 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-t9lqw"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.490948 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-p65gs" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.493969 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.494058 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.496522 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jcv4b"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.496547 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.496557 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.496567 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4vjlk"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.496602 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.496610 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-p65gs"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.496620 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-t9lqw"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.496648 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb"] Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.497134 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.498963 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.519322 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.529722 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd4c6f53-d565-473d-9d09-b5190fa3d71a-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-6d476\" (UID: \"bd4c6f53-d565-473d-9d09-b5190fa3d71a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.529849 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8qvfj\" (UniqueName: \"kubernetes.io/projected/39000d94-11b8-42ff-a127-2136d0f2cc0b-kube-api-access-8qvfj\") pod \"dns-operator-799b87ffcd-jnbtq\" (UID: \"39000d94-11b8-42ff-a127-2136d0f2cc0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jnbtq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.529935 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65be3f94-f1d5-4ebb-933f-216e1650f309-serving-cert\") pod \"kube-apiserver-operator-575994946d-c8s6j\" (UID: \"65be3f94-f1d5-4ebb-933f-216e1650f309\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.530019 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1f2cd766-1945-4e3d-aa8a-4045eacb2ff8-config\") pod \"openshift-controller-manager-operator-686468bdd5-89cvg\" (UID: \"1f2cd766-1945-4e3d-aa8a-4045eacb2ff8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.530093 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/940d93fe-9ecf-4274-9caf-6123a0ce203c-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-w2r2c\" (UID: \"940d93fe-9ecf-4274-9caf-6123a0ce203c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.530248 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-smrwt\" (UniqueName: \"kubernetes.io/projected/1f2cd766-1945-4e3d-aa8a-4045eacb2ff8-kube-api-access-smrwt\") pod \"openshift-controller-manager-operator-686468bdd5-89cvg\" (UID: \"1f2cd766-1945-4e3d-aa8a-4045eacb2ff8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.530334 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2xlj8\" (UniqueName: \"kubernetes.io/projected/4d8423b3-e68c-4083-859f-e89f705f28bd-kube-api-access-2xlj8\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.530416 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65be3f94-f1d5-4ebb-933f-216e1650f309-config\") pod \"kube-apiserver-operator-575994946d-c8s6j\" (UID: \"65be3f94-f1d5-4ebb-933f-216e1650f309\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.530507 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/39000d94-11b8-42ff-a127-2136d0f2cc0b-metrics-tls\") pod \"dns-operator-799b87ffcd-jnbtq\" (UID: \"39000d94-11b8-42ff-a127-2136d0f2cc0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jnbtq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.530690 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f-serving-cert\") pod \"openshift-config-operator-5777786469-jdqmz\" (UID: \"6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.530783 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d042745d-98a0-44c8-ac92-7704d8b43b84-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-tgjx7\" (UID: \"d042745d-98a0-44c8-ac92-7704d8b43b84\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.530936 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vtmdm\" (UniqueName: \"kubernetes.io/projected/940d93fe-9ecf-4274-9caf-6123a0ce203c-kube-api-access-vtmdm\") pod \"ingress-operator-6b9cb4dbcf-w2r2c\" (UID: \"940d93fe-9ecf-4274-9caf-6123a0ce203c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.531043 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1f2cd766-1945-4e3d-aa8a-4045eacb2ff8-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-89cvg\" (UID: \"1f2cd766-1945-4e3d-aa8a-4045eacb2ff8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.531123 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4d8423b3-e68c-4083-859f-e89f705f28bd-audit-policies\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.531249 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd4c6f53-d565-473d-9d09-b5190fa3d71a-config\") pod \"kube-controller-manager-operator-69d5f845f8-6d476\" (UID: \"bd4c6f53-d565-473d-9d09-b5190fa3d71a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.531988 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j5k9z\" (UniqueName: \"kubernetes.io/projected/ae767afd-59d5-4c04-9ecc-f9ae7b317698-kube-api-access-j5k9z\") pod \"image-pruner-29482560-n7qwb\" (UID: \"ae767afd-59d5-4c04-9ecc-f9ae7b317698\") " pod="openshift-image-registry/image-pruner-29482560-n7qwb" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.532224 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4d8423b3-e68c-4083-859f-e89f705f28bd-etcd-serving-ca\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: 
I0121 00:11:12.532351 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4d8423b3-e68c-4083-859f-e89f705f28bd-etcd-client\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.532472 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6d7mc\" (UniqueName: \"kubernetes.io/projected/f2431df6-6390-4fb8-b13e-56750ad2fed4-kube-api-access-6d7mc\") pod \"downloads-747b44746d-pdh68\" (UID: \"f2431df6-6390-4fb8-b13e-56750ad2fed4\") " pod="openshift-console/downloads-747b44746d-pdh68" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.532614 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/940d93fe-9ecf-4274-9caf-6123a0ce203c-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-w2r2c\" (UID: \"940d93fe-9ecf-4274-9caf-6123a0ce203c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.532731 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d042745d-98a0-44c8-ac92-7704d8b43b84-config\") pod \"openshift-kube-scheduler-operator-54f497555d-tgjx7\" (UID: \"d042745d-98a0-44c8-ac92-7704d8b43b84\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.532832 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d042745d-98a0-44c8-ac92-7704d8b43b84-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-tgjx7\" (UID: \"d042745d-98a0-44c8-ac92-7704d8b43b84\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.532954 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd4c6f53-d565-473d-9d09-b5190fa3d71a-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-6d476\" (UID: \"bd4c6f53-d565-473d-9d09-b5190fa3d71a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.533063 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/65be3f94-f1d5-4ebb-933f-216e1650f309-tmp-dir\") pod \"kube-apiserver-operator-575994946d-c8s6j\" (UID: \"65be3f94-f1d5-4ebb-933f-216e1650f309\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.533244 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4brfs\" (UniqueName: \"kubernetes.io/projected/ba04635f-4c5f-4669-af58-97627beae1b2-kube-api-access-4brfs\") pod \"console-operator-67c89758df-5ds28\" (UID: \"ba04635f-4c5f-4669-af58-97627beae1b2\") " pod="openshift-console-operator/console-operator-67c89758df-5ds28" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.533896 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4d8423b3-e68c-4083-859f-e89f705f28bd-audit-dir\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.534432 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/65be3f94-f1d5-4ebb-933f-216e1650f309-kube-api-access\") pod \"kube-apiserver-operator-575994946d-c8s6j\" (UID: \"65be3f94-f1d5-4ebb-933f-216e1650f309\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.534533 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f2cd766-1945-4e3d-aa8a-4045eacb2ff8-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-89cvg\" (UID: \"1f2cd766-1945-4e3d-aa8a-4045eacb2ff8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.534610 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d8423b3-e68c-4083-859f-e89f705f28bd-serving-cert\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.534696 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba04635f-4c5f-4669-af58-97627beae1b2-serving-cert\") pod \"console-operator-67c89758df-5ds28\" (UID: \"ba04635f-4c5f-4669-af58-97627beae1b2\") " pod="openshift-console-operator/console-operator-67c89758df-5ds28" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.534790 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba04635f-4c5f-4669-af58-97627beae1b2-config\") pod \"console-operator-67c89758df-5ds28\" (UID: \"ba04635f-4c5f-4669-af58-97627beae1b2\") " pod="openshift-console-operator/console-operator-67c89758df-5ds28" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.534868 5118 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609947aa-6e7c-439b-a7a7-7d06f8ab4f1c-config\") pod \"machine-approver-54c688565-cnnpj\" (UID: \"609947aa-6e7c-439b-a7a7-7d06f8ab4f1c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.534944 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4d8423b3-e68c-4083-859f-e89f705f28bd-encryption-config\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.535026 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bd4c6f53-d565-473d-9d09-b5190fa3d71a-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-6d476\" (UID: \"bd4c6f53-d565-473d-9d09-b5190fa3d71a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.535463 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/609947aa-6e7c-439b-a7a7-7d06f8ab4f1c-auth-proxy-config\") pod \"machine-approver-54c688565-cnnpj\" (UID: \"609947aa-6e7c-439b-a7a7-7d06f8ab4f1c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.535547 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xmt9z\" (UniqueName: \"kubernetes.io/projected/609947aa-6e7c-439b-a7a7-7d06f8ab4f1c-kube-api-access-xmt9z\") pod \"machine-approver-54c688565-cnnpj\" (UID: \"609947aa-6e7c-439b-a7a7-7d06f8ab4f1c\") " 
pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.535669 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d8423b3-e68c-4083-859f-e89f705f28bd-trusted-ca-bundle\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.535750 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/940d93fe-9ecf-4274-9caf-6123a0ce203c-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-w2r2c\" (UID: \"940d93fe-9ecf-4274-9caf-6123a0ce203c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.535830 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/39000d94-11b8-42ff-a127-2136d0f2cc0b-tmp-dir\") pod \"dns-operator-799b87ffcd-jnbtq\" (UID: \"39000d94-11b8-42ff-a127-2136d0f2cc0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jnbtq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.535902 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ae767afd-59d5-4c04-9ecc-f9ae7b317698-serviceca\") pod \"image-pruner-29482560-n7qwb\" (UID: \"ae767afd-59d5-4c04-9ecc-f9ae7b317698\") " pod="openshift-image-registry/image-pruner-29482560-n7qwb" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.535965 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d042745d-98a0-44c8-ac92-7704d8b43b84-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-tgjx7\" (UID: 
\"d042745d-98a0-44c8-ac92-7704d8b43b84\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.536038 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/609947aa-6e7c-439b-a7a7-7d06f8ab4f1c-machine-approver-tls\") pod \"machine-approver-54c688565-cnnpj\" (UID: \"609947aa-6e7c-439b-a7a7-7d06f8ab4f1c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.536115 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f-available-featuregates\") pod \"openshift-config-operator-5777786469-jdqmz\" (UID: \"6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.536205 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vp5z8\" (UniqueName: \"kubernetes.io/projected/6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f-kube-api-access-vp5z8\") pod \"openshift-config-operator-5777786469-jdqmz\" (UID: \"6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.534051 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4d8423b3-e68c-4083-859f-e89f705f28bd-audit-dir\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.536311 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ba04635f-4c5f-4669-af58-97627beae1b2-config\") pod \"console-operator-67c89758df-5ds28\" (UID: \"ba04635f-4c5f-4669-af58-97627beae1b2\") " pod="openshift-console-operator/console-operator-67c89758df-5ds28" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.536331 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609947aa-6e7c-439b-a7a7-7d06f8ab4f1c-config\") pod \"machine-approver-54c688565-cnnpj\" (UID: \"609947aa-6e7c-439b-a7a7-7d06f8ab4f1c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.531941 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd4c6f53-d565-473d-9d09-b5190fa3d71a-config\") pod \"kube-controller-manager-operator-69d5f845f8-6d476\" (UID: \"bd4c6f53-d565-473d-9d09-b5190fa3d71a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.534820 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f2cd766-1945-4e3d-aa8a-4045eacb2ff8-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-89cvg\" (UID: \"1f2cd766-1945-4e3d-aa8a-4045eacb2ff8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.536620 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/39000d94-11b8-42ff-a127-2136d0f2cc0b-tmp-dir\") pod \"dns-operator-799b87ffcd-jnbtq\" (UID: \"39000d94-11b8-42ff-a127-2136d0f2cc0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jnbtq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.535781 5118 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd4c6f53-d565-473d-9d09-b5190fa3d71a-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-6d476\" (UID: \"bd4c6f53-d565-473d-9d09-b5190fa3d71a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.533771 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/65be3f94-f1d5-4ebb-933f-216e1650f309-tmp-dir\") pod \"kube-apiserver-operator-575994946d-c8s6j\" (UID: \"65be3f94-f1d5-4ebb-933f-216e1650f309\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.536662 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bd4c6f53-d565-473d-9d09-b5190fa3d71a-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-6d476\" (UID: \"bd4c6f53-d565-473d-9d09-b5190fa3d71a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.536850 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4d8423b3-e68c-4083-859f-e89f705f28bd-etcd-client\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.536977 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f-available-featuregates\") pod \"openshift-config-operator-5777786469-jdqmz\" (UID: \"6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f\") " 
pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.537141 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/609947aa-6e7c-439b-a7a7-7d06f8ab4f1c-auth-proxy-config\") pod \"machine-approver-54c688565-cnnpj\" (UID: \"609947aa-6e7c-439b-a7a7-7d06f8ab4f1c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.532803 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4d8423b3-e68c-4083-859f-e89f705f28bd-etcd-serving-ca\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.534461 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f-serving-cert\") pod \"openshift-config-operator-5777786469-jdqmz\" (UID: \"6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.537258 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba04635f-4c5f-4669-af58-97627beae1b2-trusted-ca\") pod \"console-operator-67c89758df-5ds28\" (UID: \"ba04635f-4c5f-4669-af58-97627beae1b2\") " pod="openshift-console-operator/console-operator-67c89758df-5ds28" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.537316 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d8423b3-e68c-4083-859f-e89f705f28bd-serving-cert\") pod \"apiserver-8596bd845d-8t6fc\" (UID: 
\"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.531807 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4d8423b3-e68c-4083-859f-e89f705f28bd-audit-policies\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.537431 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d042745d-98a0-44c8-ac92-7704d8b43b84-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-tgjx7\" (UID: \"d042745d-98a0-44c8-ac92-7704d8b43b84\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.537982 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ae767afd-59d5-4c04-9ecc-f9ae7b317698-serviceca\") pod \"image-pruner-29482560-n7qwb\" (UID: \"ae767afd-59d5-4c04-9ecc-f9ae7b317698\") " pod="openshift-image-registry/image-pruner-29482560-n7qwb" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.538130 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba04635f-4c5f-4669-af58-97627beae1b2-trusted-ca\") pod \"console-operator-67c89758df-5ds28\" (UID: \"ba04635f-4c5f-4669-af58-97627beae1b2\") " pod="openshift-console-operator/console-operator-67c89758df-5ds28" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.539198 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.539388 5118 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4d8423b3-e68c-4083-859f-e89f705f28bd-encryption-config\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.539819 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d8423b3-e68c-4083-859f-e89f705f28bd-trusted-ca-bundle\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.539982 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/609947aa-6e7c-439b-a7a7-7d06f8ab4f1c-machine-approver-tls\") pod \"machine-approver-54c688565-cnnpj\" (UID: \"609947aa-6e7c-439b-a7a7-7d06f8ab4f1c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.541069 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba04635f-4c5f-4669-af58-97627beae1b2-serving-cert\") pod \"console-operator-67c89758df-5ds28\" (UID: \"ba04635f-4c5f-4669-af58-97627beae1b2\") " pod="openshift-console-operator/console-operator-67c89758df-5ds28" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.562664 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.579727 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.599106 5118 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.621199 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.639054 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.659145 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.682068 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.699299 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.743619 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.748538 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.770875 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.775068 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1f2cd766-1945-4e3d-aa8a-4045eacb2ff8-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-89cvg\" (UID: \"1f2cd766-1945-4e3d-aa8a-4045eacb2ff8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.781233 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.798965 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.800973 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f2cd766-1945-4e3d-aa8a-4045eacb2ff8-config\") pod \"openshift-controller-manager-operator-686468bdd5-89cvg\" (UID: \"1f2cd766-1945-4e3d-aa8a-4045eacb2ff8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.819058 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.838983 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.845564 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/39000d94-11b8-42ff-a127-2136d0f2cc0b-metrics-tls\") pod \"dns-operator-799b87ffcd-jnbtq\" (UID: \"39000d94-11b8-42ff-a127-2136d0f2cc0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jnbtq" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 
00:11:12.859463 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.880644 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.900402 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.920883 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.926407 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d042745d-98a0-44c8-ac92-7704d8b43b84-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-tgjx7\" (UID: \"d042745d-98a0-44c8-ac92-7704d8b43b84\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.940744 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.960417 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.964725 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d042745d-98a0-44c8-ac92-7704d8b43b84-config\") pod \"openshift-kube-scheduler-operator-54f497555d-tgjx7\" (UID: 
\"d042745d-98a0-44c8-ac92-7704d8b43b84\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7" Jan 21 00:11:12 crc kubenswrapper[5118]: I0121 00:11:12.980235 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.000196 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.013733 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/940d93fe-9ecf-4274-9caf-6123a0ce203c-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-w2r2c\" (UID: \"940d93fe-9ecf-4274-9caf-6123a0ce203c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.031916 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.035421 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/940d93fe-9ecf-4274-9caf-6123a0ce203c-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-w2r2c\" (UID: \"940d93fe-9ecf-4274-9caf-6123a0ce203c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.040461 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.060295 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.080797 5118 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.099604 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.119749 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.124190 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65be3f94-f1d5-4ebb-933f-216e1650f309-serving-cert\") pod \"kube-apiserver-operator-575994946d-c8s6j\" (UID: \"65be3f94-f1d5-4ebb-933f-216e1650f309\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.139705 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.141085 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65be3f94-f1d5-4ebb-933f-216e1650f309-config\") pod \"kube-apiserver-operator-575994946d-c8s6j\" (UID: \"65be3f94-f1d5-4ebb-933f-216e1650f309\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.160638 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.180786 5118 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.200020 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.219711 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.239794 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.280572 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.301008 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.319771 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.341312 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.360701 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.380515 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.397534 5118 request.go:752] 
"Waited before sending request" delay="1.018697217s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.399405 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.420180 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.440039 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.460376 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.480758 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.499869 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.519716 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.539853 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 21 00:11:13 crc 
kubenswrapper[5118]: I0121 00:11:13.559842 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.580377 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.600875 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.620538 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.639849 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.694575 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdj49\" (UniqueName: \"kubernetes.io/projected/0d6c776b-eaf0-4068-983b-d848bbc96323-kube-api-access-kdj49\") pod \"cluster-samples-operator-6b564684c8-mn7c4\" (UID: \"0d6c776b-eaf0-4068-983b-d848bbc96323\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mn7c4" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.716261 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5r4q\" (UniqueName: \"kubernetes.io/projected/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-kube-api-access-c5r4q\") pod \"controller-manager-65b6cccf98-trbkq\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.718622 5118 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.730393 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-btb6d\" (UniqueName: \"kubernetes.io/projected/4580359c-1bab-4bde-a783-bc3866e460a0-kube-api-access-btb6d\") pod \"openshift-apiserver-operator-846cbfc458-c5g8n\" (UID: \"4580359c-1bab-4bde-a783-bc3866e460a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.740468 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.753875 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq2ng\" (UniqueName: \"kubernetes.io/projected/6796e6ff-3d28-4061-a0b4-cd8088da6919-kube-api-access-mq2ng\") pod \"apiserver-9ddfb9f55-7lpxz\" (UID: \"6796e6ff-3d28-4061-a0b4-cd8088da6919\") " pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.780251 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.788791 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgpcn\" (UniqueName: \"kubernetes.io/projected/1202d380-a207-455c-8bd8-2b82e7974afa-kube-api-access-rgpcn\") pod \"machine-api-operator-755bb95488-r2gm9\" (UID: \"1202d380-a207-455c-8bd8-2b82e7974afa\") " pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.799697 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.839400 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-25pkl\" (UniqueName: \"kubernetes.io/projected/79567875-e72c-4685-8919-03cda9a6f644-kube-api-access-25pkl\") pod \"authentication-operator-7f5c659b84-5zl5v\" (UID: \"79567875-e72c-4685-8919-03cda9a6f644\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.840406 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mn7c4" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.846011 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.856479 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.871349 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz6wz\" (UniqueName: \"kubernetes.io/projected/19280e75-8f04-47d1-bc42-124082dfd247-kube-api-access-xz6wz\") pod \"oauth-openshift-66458b6674-5gv2n\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.880510 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.882757 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jct6m\" (UniqueName: \"kubernetes.io/projected/1968a714-512b-40f9-a302-f8905b0855fd-kube-api-access-jct6m\") pod \"route-controller-manager-776cdc94d6-s55xm\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.900191 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.919934 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.929304 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-trbkq"] Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.940205 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 21 00:11:13 crc kubenswrapper[5118]: W0121 00:11:13.940589 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1f2bd9d_a01b_4672_b4b1_f88057b52f08.slice/crio-d8162e1dd50bbad0a7190bf92c4e6bafd215436bd2edeeebefcbed69bfebc413 WatchSource:0}: Error finding container d8162e1dd50bbad0a7190bf92c4e6bafd215436bd2edeeebefcbed69bfebc413: Status 404 returned error can't find the container with id d8162e1dd50bbad0a7190bf92c4e6bafd215436bd2edeeebefcbed69bfebc413 Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.966784 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 21 00:11:13 crc kubenswrapper[5118]: I0121 00:11:13.980033 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.001769 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.019719 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mn7c4"] Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.020292 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.039046 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.052118 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver/apiserver-9ddfb9f55-7lpxz"] Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.059031 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 21 00:11:14 crc kubenswrapper[5118]: W0121 00:11:14.060826 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6796e6ff_3d28_4061_a0b4_cd8088da6919.slice/crio-7ce04305186be3924510ce0bbb3b37aed05c898e01d4de56d8184c62d1985566 WatchSource:0}: Error finding container 7ce04305186be3924510ce0bbb3b37aed05c898e01d4de56d8184c62d1985566: Status 404 returned error can't find the container with id 7ce04305186be3924510ce0bbb3b37aed05c898e01d4de56d8184c62d1985566 Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.067954 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n"] Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.069510 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9" Jan 21 00:11:14 crc kubenswrapper[5118]: W0121 00:11:14.076867 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4580359c_1bab_4bde_a783_bc3866e460a0.slice/crio-485c7d1ad0a231a6ba76dac65f831dc3a0b96a8fc404eee8ad583992139db29c WatchSource:0}: Error finding container 485c7d1ad0a231a6ba76dac65f831dc3a0b96a8fc404eee8ad583992139db29c: Status 404 returned error can't find the container with id 485c7d1ad0a231a6ba76dac65f831dc3a0b96a8fc404eee8ad583992139db29c Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.080273 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.086419 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.098276 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.100499 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.119514 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.130344 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.139757 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.162543 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.181696 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.200980 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.219412 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.241589 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.264271 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.264915 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-r2gm9"] Jan 21 00:11:14 crc kubenswrapper[5118]: W0121 00:11:14.277886 5118 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1202d380_a207_455c_8bd8_2b82e7974afa.slice/crio-0061ac2c06b54dab0e67148d9b4971c4981cdf3f723a41a796af5047028d7e93 WatchSource:0}: Error finding container 0061ac2c06b54dab0e67148d9b4971c4981cdf3f723a41a796af5047028d7e93: Status 404 returned error can't find the container with id 0061ac2c06b54dab0e67148d9b4971c4981cdf3f723a41a796af5047028d7e93 Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.280356 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.319574 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.331126 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5gv2n"] Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.343139 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.360242 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.379734 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.392029 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm"] Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.398660 5118 request.go:752] "Waited before sending request" delay="1.911993203s" reason="client-side 
throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-sysctl-allowlist&limit=500&resourceVersion=0" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.400786 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.419298 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.439499 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.459775 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.479883 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.499808 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.519077 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.539498 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.549052 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mn7c4" 
event={"ID":"0d6c776b-eaf0-4068-983b-d848bbc96323","Type":"ContainerStarted","Data":"83c79e7da5a507feaa7d1e9ad40796f3c462c30d819935faf6df3266c0949fff"} Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.549284 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mn7c4" event={"ID":"0d6c776b-eaf0-4068-983b-d848bbc96323","Type":"ContainerStarted","Data":"0f0c2ac01de2bd570376acbc84ebd57be9d5a186594026ef810c28144ba30983"} Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.549353 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mn7c4" event={"ID":"0d6c776b-eaf0-4068-983b-d848bbc96323","Type":"ContainerStarted","Data":"9c3620f39528fd4c10aa2ec81fe72a174c2d13f4c991306486a77c67f590afe1"} Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.554312 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" event={"ID":"1968a714-512b-40f9-a302-f8905b0855fd","Type":"ContainerStarted","Data":"9d98f30b316815dcb2ab563446ddac268f764a905a51f5225672c8ffd41210db"} Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.561225 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.562015 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9" event={"ID":"1202d380-a207-455c-8bd8-2b82e7974afa","Type":"ContainerStarted","Data":"56c4306af862a54dd073bcee97863ba520e0286a21f1d1a2993e50581712bf7c"} Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.562058 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9" 
event={"ID":"1202d380-a207-455c-8bd8-2b82e7974afa","Type":"ContainerStarted","Data":"0061ac2c06b54dab0e67148d9b4971c4981cdf3f723a41a796af5047028d7e93"} Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.562803 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v"] Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.566185 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n" event={"ID":"4580359c-1bab-4bde-a783-bc3866e460a0","Type":"ContainerStarted","Data":"f446b9a01d22e1a1c5c2d0ecc16d84c5945a4d7df3917e575484da0b3f4e2128"} Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.566232 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n" event={"ID":"4580359c-1bab-4bde-a783-bc3866e460a0","Type":"ContainerStarted","Data":"485c7d1ad0a231a6ba76dac65f831dc3a0b96a8fc404eee8ad583992139db29c"} Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.571863 5118 generic.go:358] "Generic (PLEG): container finished" podID="6796e6ff-3d28-4061-a0b4-cd8088da6919" containerID="58ff3be69d72ee29a8bc03bc8a7aaf1a6bde73d3d4b48ed051ca2d6dceb8d513" exitCode=0 Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.571978 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" event={"ID":"6796e6ff-3d28-4061-a0b4-cd8088da6919","Type":"ContainerDied","Data":"58ff3be69d72ee29a8bc03bc8a7aaf1a6bde73d3d4b48ed051ca2d6dceb8d513"} Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.572019 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" event={"ID":"6796e6ff-3d28-4061-a0b4-cd8088da6919","Type":"ContainerStarted","Data":"7ce04305186be3924510ce0bbb3b37aed05c898e01d4de56d8184c62d1985566"} Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 
00:11:14.575661 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" event={"ID":"19280e75-8f04-47d1-bc42-124082dfd247","Type":"ContainerStarted","Data":"15606bf42ae247b88c664efe79aa9a26cf4dd7ebf4a45d199bffae59f4c423e2"} Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.578811 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" event={"ID":"c1f2bd9d-a01b-4672-b4b1-f88057b52f08","Type":"ContainerStarted","Data":"a994dad9ee702ae7f09b2f2d20f3829fe75ed2d0e6c5a5e9f9644eb3d04682f7"} Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.578852 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" event={"ID":"c1f2bd9d-a01b-4672-b4b1-f88057b52f08","Type":"ContainerStarted","Data":"d8162e1dd50bbad0a7190bf92c4e6bafd215436bd2edeeebefcbed69bfebc413"} Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.579382 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.579507 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 21 00:11:14 crc kubenswrapper[5118]: W0121 00:11:14.588269 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79567875_e72c_4685_8919_03cda9a6f644.slice/crio-810fe0e8748b3e2f680ed2249ac8f2238743e8476e657ddd502b70ffbdb9d7d0 WatchSource:0}: Error finding container 810fe0e8748b3e2f680ed2249ac8f2238743e8476e657ddd502b70ffbdb9d7d0: Status 404 returned error can't find the container with id 810fe0e8748b3e2f680ed2249ac8f2238743e8476e657ddd502b70ffbdb9d7d0 Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 
00:11:14.598886 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.650703 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd4c6f53-d565-473d-9d09-b5190fa3d71a-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-6d476\" (UID: \"bd4c6f53-d565-473d-9d09-b5190fa3d71a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.653777 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qvfj\" (UniqueName: \"kubernetes.io/projected/39000d94-11b8-42ff-a127-2136d0f2cc0b-kube-api-access-8qvfj\") pod \"dns-operator-799b87ffcd-jnbtq\" (UID: \"39000d94-11b8-42ff-a127-2136d0f2cc0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jnbtq" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.675221 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/940d93fe-9ecf-4274-9caf-6123a0ce203c-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-w2r2c\" (UID: \"940d93fe-9ecf-4274-9caf-6123a0ce203c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.694721 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-smrwt\" (UniqueName: \"kubernetes.io/projected/1f2cd766-1945-4e3d-aa8a-4045eacb2ff8-kube-api-access-smrwt\") pod \"openshift-controller-manager-operator-686468bdd5-89cvg\" (UID: \"1f2cd766-1945-4e3d-aa8a-4045eacb2ff8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.721052 5118 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xlj8\" (UniqueName: \"kubernetes.io/projected/4d8423b3-e68c-4083-859f-e89f705f28bd-kube-api-access-2xlj8\") pod \"apiserver-8596bd845d-8t6fc\" (UID: \"4d8423b3-e68c-4083-859f-e89f705f28bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.733694 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtmdm\" (UniqueName: \"kubernetes.io/projected/940d93fe-9ecf-4274-9caf-6123a0ce203c-kube-api-access-vtmdm\") pod \"ingress-operator-6b9cb4dbcf-w2r2c\" (UID: \"940d93fe-9ecf-4274-9caf-6123a0ce203c\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.756691 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5k9z\" (UniqueName: \"kubernetes.io/projected/ae767afd-59d5-4c04-9ecc-f9ae7b317698-kube-api-access-j5k9z\") pod \"image-pruner-29482560-n7qwb\" (UID: \"ae767afd-59d5-4c04-9ecc-f9ae7b317698\") " pod="openshift-image-registry/image-pruner-29482560-n7qwb" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.768783 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29482560-n7qwb" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.783946 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.791609 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d042745d-98a0-44c8-ac92-7704d8b43b84-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-tgjx7\" (UID: \"d042745d-98a0-44c8-ac92-7704d8b43b84\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.812470 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.813721 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d7mc\" (UniqueName: \"kubernetes.io/projected/f2431df6-6390-4fb8-b13e-56750ad2fed4-kube-api-access-6d7mc\") pod \"downloads-747b44746d-pdh68\" (UID: \"f2431df6-6390-4fb8-b13e-56750ad2fed4\") " pod="openshift-console/downloads-747b44746d-pdh68" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.826906 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4brfs\" (UniqueName: \"kubernetes.io/projected/ba04635f-4c5f-4669-af58-97627beae1b2-kube-api-access-4brfs\") pod \"console-operator-67c89758df-5ds28\" (UID: \"ba04635f-4c5f-4669-af58-97627beae1b2\") " pod="openshift-console-operator/console-operator-67c89758df-5ds28" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.839741 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65be3f94-f1d5-4ebb-933f-216e1650f309-kube-api-access\") pod \"kube-apiserver-operator-575994946d-c8s6j\" (UID: \"65be3f94-f1d5-4ebb-933f-216e1650f309\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.858840 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp5z8\" (UniqueName: \"kubernetes.io/projected/6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f-kube-api-access-vp5z8\") pod \"openshift-config-operator-5777786469-jdqmz\" (UID: \"6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.875871 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmt9z\" (UniqueName: \"kubernetes.io/projected/609947aa-6e7c-439b-a7a7-7d06f8ab4f1c-kube-api-access-xmt9z\") pod \"machine-approver-54c688565-cnnpj\" (UID: \"609947aa-6e7c-439b-a7a7-7d06f8ab4f1c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.877211 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-pdh68" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.903224 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.909240 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-jnbtq" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.919039 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.928949 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.934569 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.975730 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6jsw\" (UniqueName: \"kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-kube-api-access-t6jsw\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.975765 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40b0f866-cb01-4820-863e-91d46a2fdda1-serving-cert\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.975789 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.975817 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c52d3a9-6263-4edb-9071-1d5dc43c7197-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") 
" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.975888 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96762\" (UniqueName: \"kubernetes.io/projected/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-kube-api-access-96762\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.975909 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-service-ca\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.975936 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-bound-sa-token\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.975957 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-console-serving-cert\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.975976 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-oauth-serving-cert\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976011 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40b0f866-cb01-4820-863e-91d46a2fdda1-etcd-service-ca\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976026 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/40b0f866-cb01-4820-863e-91d46a2fdda1-tmp-dir\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976040 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-trusted-ca-bundle\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976066 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40b0f866-cb01-4820-863e-91d46a2fdda1-etcd-client\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976104 5118 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tt55\" (UniqueName: \"kubernetes.io/projected/40b0f866-cb01-4820-863e-91d46a2fdda1-kube-api-access-9tt55\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976124 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9c52d3a9-6263-4edb-9071-1d5dc43c7197-tmp\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976142 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gdsn\" (UniqueName: \"kubernetes.io/projected/9c52d3a9-6263-4edb-9071-1d5dc43c7197-kube-api-access-6gdsn\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976216 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c52d3a9-6263-4edb-9071-1d5dc43c7197-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976256 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: 
\"kubernetes.io/empty-dir/9c52d3a9-6263-4edb-9071-1d5dc43c7197-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976283 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d503143-f75b-40e6-b0e3-d1bd595a05ae-trusted-ca\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976307 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d503143-f75b-40e6-b0e3-d1bd595a05ae-ca-trust-extracted\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976330 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d503143-f75b-40e6-b0e3-d1bd595a05ae-registry-certificates\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976354 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-console-oauth-config\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4" Jan 
21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976387 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d503143-f75b-40e6-b0e3-d1bd595a05ae-installation-pull-secrets\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976409 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40b0f866-cb01-4820-863e-91d46a2fdda1-config\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976463 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c52d3a9-6263-4edb-9071-1d5dc43c7197-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976496 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-console-config\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976537 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/40b0f866-cb01-4820-863e-91d46a2fdda1-etcd-ca\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd" Jan 21 00:11:14 crc kubenswrapper[5118]: I0121 00:11:14.976579 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-registry-tls\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:14 crc kubenswrapper[5118]: E0121 00:11:14.977461 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:15.477449164 +0000 UTC m=+130.801696182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.067351 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.078572 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-5ds28" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079057 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079223 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/271e0654-9d86-4ec1-8c25-d345a8a1eb0a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nkx6r\" (UID: \"271e0654-9d86-4ec1-8c25-d345a8a1eb0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nkx6r" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079246 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa4fb550-3d2a-4a19-8b93-c5e54e9b897a-metrics-certs\") pod \"router-default-68cf44c8b8-jrk8q\" (UID: \"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a\") " pod="openshift-ingress/router-default-68cf44c8b8-jrk8q" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079264 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c693d48-122b-44a7-8257-f4f312e980aa-config-volume\") pod \"collect-profiles-29482560-6bcnb\" (UID: \"7c693d48-122b-44a7-8257-f4f312e980aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079281 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-wltq5\" (UniqueName: \"kubernetes.io/projected/cd261347-6a59-453c-836f-31c195e37417-kube-api-access-wltq5\") pod \"dns-default-jcv4b\" (UID: \"cd261347-6a59-453c-836f-31c195e37417\") " pod="openshift-dns/dns-default-jcv4b" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079296 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65-config\") pod \"service-ca-operator-5b9c976747-qq2q6\" (UID: \"5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079325 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9tt55\" (UniqueName: \"kubernetes.io/projected/40b0f866-cb01-4820-863e-91d46a2fdda1-kube-api-access-9tt55\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079375 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5be2081-6a80-4521-a1dd-2e332352f29c-config\") pod \"kube-storage-version-migrator-operator-565b79b866-bg6nr\" (UID: \"a5be2081-6a80-4521-a1dd-2e332352f29c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079428 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9c52d3a9-6263-4edb-9071-1d5dc43c7197-tmp\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079445 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6gdsn\" (UniqueName: \"kubernetes.io/projected/9c52d3a9-6263-4edb-9071-1d5dc43c7197-kube-api-access-6gdsn\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079470 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8ec6d719-5d37-4de1-9afe-3e01bfe8d640-signing-key\") pod \"service-ca-74545575db-p65gs\" (UID: \"8ec6d719-5d37-4de1-9afe-3e01bfe8d640\") " pod="openshift-service-ca/service-ca-74545575db-p65gs" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079487 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ppts\" (UniqueName: \"kubernetes.io/projected/5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65-kube-api-access-8ppts\") pod \"service-ca-operator-5b9c976747-qq2q6\" (UID: \"5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079502 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97853351-9834-428a-b4b9-399da76c66be-tmpfs\") pod \"olm-operator-5cdf44d969-dnqp8\" (UID: \"97853351-9834-428a-b4b9-399da76c66be\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079517 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-r7qvb\" (UniqueName: \"kubernetes.io/projected/dff88ce7-473b-4e36-ae15-98b61242704c-kube-api-access-r7qvb\") pod \"catalog-operator-75ff9f647d-p7jcp\" (UID: \"dff88ce7-473b-4e36-ae15-98b61242704c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079556 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c52d3a9-6263-4edb-9071-1d5dc43c7197-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079572 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/9c52d3a9-6263-4edb-9071-1d5dc43c7197-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079597 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/11f14664-de3b-4cae-af94-5367cc3f2f4b-images\") pod \"machine-config-operator-67c9d58cbb-2rnqz\" (UID: \"11f14664-de3b-4cae-af94-5367cc3f2f4b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079615 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d503143-f75b-40e6-b0e3-d1bd595a05ae-trusted-ca\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " 
pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079631 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3ba5de99-4b50-4027-b9c6-f1fbb61a7146-apiservice-cert\") pod \"packageserver-7d4fc7d867-bnqhm\" (UID: \"3ba5de99-4b50-4027-b9c6-f1fbb61a7146\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079646 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd261347-6a59-453c-836f-31c195e37417-config-volume\") pod \"dns-default-jcv4b\" (UID: \"cd261347-6a59-453c-836f-31c195e37417\") " pod="openshift-dns/dns-default-jcv4b" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079700 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d503143-f75b-40e6-b0e3-d1bd595a05ae-ca-trust-extracted\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.079717 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d503143-f75b-40e6-b0e3-d1bd595a05ae-registry-certificates\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:15 crc kubenswrapper[5118]: E0121 00:11:15.079992 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af 
nodeName:}" failed. No retries permitted until 2026-01-21 00:11:15.579975406 +0000 UTC m=+130.904222424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.080059 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/9c52d3a9-6263-4edb-9071-1d5dc43c7197-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.087674 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4fb550-3d2a-4a19-8b93-c5e54e9b897a-service-ca-bundle\") pod \"router-default-68cf44c8b8-jrk8q\" (UID: \"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a\") " pod="openshift-ingress/router-default-68cf44c8b8-jrk8q" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.087737 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2wrx\" (UniqueName: \"kubernetes.io/projected/a5be2081-6a80-4521-a1dd-2e332352f29c-kube-api-access-v2wrx\") pod \"kube-storage-version-migrator-operator-565b79b866-bg6nr\" (UID: \"a5be2081-6a80-4521-a1dd-2e332352f29c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr" Jan 21 00:11:15 crc 
kubenswrapper[5118]: I0121 00:11:15.087795 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-console-oauth-config\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.087898 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-5wlqh\" (UID: \"c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.087936 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/946b14ca-c6a8-4020-bdb0-01d5ca69b536-cert\") pod \"ingress-canary-4vjlk\" (UID: \"946b14ca-c6a8-4020-bdb0-01d5ca69b536\") " pod="openshift-ingress-canary/ingress-canary-4vjlk" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.087962 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cd261347-6a59-453c-836f-31c195e37417-tmp-dir\") pod \"dns-default-jcv4b\" (UID: \"cd261347-6a59-453c-836f-31c195e37417\") " pod="openshift-dns/dns-default-jcv4b" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.087992 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/26d1a4fa-1469-4128-bd56-c9a122b28068-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-g77tt\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.088068 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftz2k\" (UniqueName: \"kubernetes.io/projected/f5911eba-6406-44b4-868f-a47787c95fdf-kube-api-access-ftz2k\") pod \"migrator-866fcbc849-s4czg\" (UID: \"f5911eba-6406-44b4-868f-a47787c95fdf\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s4czg" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.088096 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8823ee71-944b-492d-8676-09a4f6e0103f-plugins-dir\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.088193 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d503143-f75b-40e6-b0e3-d1bd595a05ae-installation-pull-secrets\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.088607 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9c52d3a9-6263-4edb-9071-1d5dc43c7197-tmp\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.088607 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/0d503143-f75b-40e6-b0e3-d1bd595a05ae-registry-certificates\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.088691 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40b0f866-cb01-4820-863e-91d46a2fdda1-config\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.089396 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d503143-f75b-40e6-b0e3-d1bd595a05ae-ca-trust-extracted\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.089525 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/485b5bf0-70af-4e4a-b766-d9e63a94395f-webhook-certs\") pod \"multus-admission-controller-69db94689b-wkjhb\" (UID: \"485b5bf0-70af-4e4a-b766-d9e63a94395f\") " pod="openshift-multus/multus-admission-controller-69db94689b-wkjhb" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.091052 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d503143-f75b-40e6-b0e3-d1bd595a05ae-trusted-ca\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.092226 5118 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dff88ce7-473b-4e36-ae15-98b61242704c-tmpfs\") pod \"catalog-operator-75ff9f647d-p7jcp\" (UID: \"dff88ce7-473b-4e36-ae15-98b61242704c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.092317 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6kcg\" (UniqueName: \"kubernetes.io/projected/946b14ca-c6a8-4020-bdb0-01d5ca69b536-kube-api-access-g6kcg\") pod \"ingress-canary-4vjlk\" (UID: \"946b14ca-c6a8-4020-bdb0-01d5ca69b536\") " pod="openshift-ingress-canary/ingress-canary-4vjlk" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.092406 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htq2h\" (UniqueName: \"kubernetes.io/projected/59298912-50d6-49ab-82d9-625a7df65661-kube-api-access-htq2h\") pod \"machine-config-server-f7rf5\" (UID: \"59298912-50d6-49ab-82d9-625a7df65661\") " pod="openshift-machine-config-operator/machine-config-server-f7rf5" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.092430 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dff88ce7-473b-4e36-ae15-98b61242704c-srv-cert\") pod \"catalog-operator-75ff9f647d-p7jcp\" (UID: \"dff88ce7-473b-4e36-ae15-98b61242704c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.092473 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c52d3a9-6263-4edb-9071-1d5dc43c7197-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: 
\"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.095660 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40b0f866-cb01-4820-863e-91d46a2fdda1-config\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.096348 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvjxk\" (UniqueName: \"kubernetes.io/projected/11f14664-de3b-4cae-af94-5367cc3f2f4b-kube-api-access-rvjxk\") pod \"machine-config-operator-67c9d58cbb-2rnqz\" (UID: \"11f14664-de3b-4cae-af94-5367cc3f2f4b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.096563 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-console-config\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.096651 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3ba5de99-4b50-4027-b9c6-f1fbb61a7146-webhook-cert\") pod \"packageserver-7d4fc7d867-bnqhm\" (UID: \"3ba5de99-4b50-4027-b9c6-f1fbb61a7146\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.096686 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-5wlqh\" (UID: \"c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.096789 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55qsh\" (UniqueName: \"kubernetes.io/projected/485b5bf0-70af-4e4a-b766-d9e63a94395f-kube-api-access-55qsh\") pod \"multus-admission-controller-69db94689b-wkjhb\" (UID: \"485b5bf0-70af-4e4a-b766-d9e63a94395f\") " pod="openshift-multus/multus-admission-controller-69db94689b-wkjhb"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.096877 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3ba5de99-4b50-4027-b9c6-f1fbb61a7146-tmpfs\") pod \"packageserver-7d4fc7d867-bnqhm\" (UID: \"3ba5de99-4b50-4027-b9c6-f1fbb61a7146\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.097211 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65tf4\" (UniqueName: \"kubernetes.io/projected/aa4fb550-3d2a-4a19-8b93-c5e54e9b897a-kube-api-access-65tf4\") pod \"router-default-68cf44c8b8-jrk8q\" (UID: \"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a\") " pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.097248 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5be2081-6a80-4521-a1dd-2e332352f29c-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-bg6nr\" (UID: \"a5be2081-6a80-4521-a1dd-2e332352f29c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.097289 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-5r9pr\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.097562 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40b0f866-cb01-4820-863e-91d46a2fdda1-etcd-ca\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.097612 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wchdj\" (UniqueName: \"kubernetes.io/projected/26d1a4fa-1469-4128-bd56-c9a122b28068-kube-api-access-wchdj\") pod \"cni-sysctl-allowlist-ds-g77tt\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.097683 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cd261347-6a59-453c-836f-31c195e37417-metrics-tls\") pod \"dns-default-jcv4b\" (UID: \"cd261347-6a59-453c-836f-31c195e37417\") " pod="openshift-dns/dns-default-jcv4b"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.097736 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-registry-tls\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.097764 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs62m\" (UniqueName: \"kubernetes.io/projected/8ec6d719-5d37-4de1-9afe-3e01bfe8d640-kube-api-access-bs62m\") pod \"service-ca-74545575db-p65gs\" (UID: \"8ec6d719-5d37-4de1-9afe-3e01bfe8d640\") " pod="openshift-service-ca/service-ca-74545575db-p65gs"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.097820 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8823ee71-944b-492d-8676-09a4f6e0103f-registration-dir\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.097942 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8823ee71-944b-492d-8676-09a4f6e0103f-socket-dir\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.098035 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8mlf\" (UniqueName: \"kubernetes.io/projected/97853351-9834-428a-b4b9-399da76c66be-kube-api-access-w8mlf\") pod \"olm-operator-5cdf44d969-dnqp8\" (UID: \"97853351-9834-428a-b4b9-399da76c66be\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.098085 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-tmp\") pod \"marketplace-operator-547dbd544d-5r9pr\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.098992 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40b0f866-cb01-4820-863e-91d46a2fdda1-etcd-ca\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.099485 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-console-config\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.101312 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t6jsw\" (UniqueName: \"kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-kube-api-access-t6jsw\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.102006 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40b0f866-cb01-4820-863e-91d46a2fdda1-serving-cert\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.102173 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8ec6d719-5d37-4de1-9afe-3e01bfe8d640-signing-cabundle\") pod \"service-ca-74545575db-p65gs\" (UID: \"8ec6d719-5d37-4de1-9afe-3e01bfe8d640\") " pod="openshift-service-ca/service-ca-74545575db-p65gs"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.103009 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.103281 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/59298912-50d6-49ab-82d9-625a7df65661-certs\") pod \"machine-config-server-f7rf5\" (UID: \"59298912-50d6-49ab-82d9-625a7df65661\") " pod="openshift-machine-config-operator/machine-config-server-f7rf5"
Jan 21 00:11:15 crc kubenswrapper[5118]: E0121 00:11:15.103714 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:15.603696726 +0000 UTC m=+130.927943744 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.106571 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-registry-tls\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.106834 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c52d3a9-6263-4edb-9071-1d5dc43c7197-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.107138 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/11f14664-de3b-4cae-af94-5367cc3f2f4b-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-2rnqz\" (UID: \"11f14664-de3b-4cae-af94-5367cc3f2f4b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.107261 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/26d1a4fa-1469-4128-bd56-c9a122b28068-ready\") pod \"cni-sysctl-allowlist-ds-g77tt\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.108324 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/26d1a4fa-1469-4128-bd56-c9a122b28068-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-g77tt\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.108416 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-96762\" (UniqueName: \"kubernetes.io/projected/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-kube-api-access-96762\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.108473 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn28v\" (UniqueName: \"kubernetes.io/projected/7c693d48-122b-44a7-8257-f4f312e980aa-kube-api-access-hn28v\") pod \"collect-profiles-29482560-6bcnb\" (UID: \"7c693d48-122b-44a7-8257-f4f312e980aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.108509 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4jzq\" (UniqueName: \"kubernetes.io/projected/8823ee71-944b-492d-8676-09a4f6e0103f-kube-api-access-c4jzq\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.108598 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c693d48-122b-44a7-8257-f4f312e980aa-secret-volume\") pod \"collect-profiles-29482560-6bcnb\" (UID: \"7c693d48-122b-44a7-8257-f4f312e980aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.109334 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dr85\" (UniqueName: \"kubernetes.io/projected/3ba5de99-4b50-4027-b9c6-f1fbb61a7146-kube-api-access-7dr85\") pod \"packageserver-7d4fc7d867-bnqhm\" (UID: \"3ba5de99-4b50-4027-b9c6-f1fbb61a7146\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.109833 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-service-ca\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.110984 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-service-ca\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.111104 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-bound-sa-token\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.111138 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8823ee71-944b-492d-8676-09a4f6e0103f-mountpoint-dir\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.111393 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c52d3a9-6263-4edb-9071-1d5dc43c7197-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.111411 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8823ee71-944b-492d-8676-09a4f6e0103f-csi-data-dir\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.111555 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/97853351-9834-428a-b4b9-399da76c66be-srv-cert\") pod \"olm-operator-5cdf44d969-dnqp8\" (UID: \"97853351-9834-428a-b4b9-399da76c66be\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.111749 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-console-serving-cert\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.111813 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc8cn\" (UniqueName: \"kubernetes.io/projected/a6c1fff3-93b2-4ea5-b74f-8f7715a04e5c-kube-api-access-zc8cn\") pod \"package-server-manager-77f986bd66-8zvdp\" (UID: \"a6c1fff3-93b2-4ea5-b74f-8f7715a04e5c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.111904 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45tbw\" (UniqueName: \"kubernetes.io/projected/271e0654-9d86-4ec1-8c25-d345a8a1eb0a-kube-api-access-45tbw\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nkx6r\" (UID: \"271e0654-9d86-4ec1-8c25-d345a8a1eb0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nkx6r"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.111967 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-oauth-serving-cert\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.112406 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg6xt\" (UniqueName: \"kubernetes.io/projected/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-kube-api-access-xg6xt\") pod \"marketplace-operator-547dbd544d-5r9pr\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.112482 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dff88ce7-473b-4e36-ae15-98b61242704c-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-p7jcp\" (UID: \"dff88ce7-473b-4e36-ae15-98b61242704c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.112684 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/11f14664-de3b-4cae-af94-5367cc3f2f4b-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-2rnqz\" (UID: \"11f14664-de3b-4cae-af94-5367cc3f2f4b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.112720 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6c1fff3-93b2-4ea5-b74f-8f7715a04e5c-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-8zvdp\" (UID: \"a6c1fff3-93b2-4ea5-b74f-8f7715a04e5c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.112783 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa4fb550-3d2a-4a19-8b93-c5e54e9b897a-stats-auth\") pod \"router-default-68cf44c8b8-jrk8q\" (UID: \"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a\") " pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.112814 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65-serving-cert\") pod \"service-ca-operator-5b9c976747-qq2q6\" (UID: \"5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.112839 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/59298912-50d6-49ab-82d9-625a7df65661-node-bootstrap-token\") pod \"machine-config-server-f7rf5\" (UID: \"59298912-50d6-49ab-82d9-625a7df65661\") " pod="openshift-machine-config-operator/machine-config-server-f7rf5"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.113048 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-oauth-serving-cert\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.113418 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-5r9pr\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.113755 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40b0f866-cb01-4820-863e-91d46a2fdda1-etcd-service-ca\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.113824 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/40b0f866-cb01-4820-863e-91d46a2fdda1-tmp-dir\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.113870 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-trusted-ca-bundle\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.114169 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/40b0f866-cb01-4820-863e-91d46a2fdda1-tmp-dir\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.114235 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlc2x\" (UniqueName: \"kubernetes.io/projected/c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9-kube-api-access-qlc2x\") pod \"machine-config-controller-f9cdd68f7-5wlqh\" (UID: \"c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.114517 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40b0f866-cb01-4820-863e-91d46a2fdda1-etcd-client\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.114552 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/97853351-9834-428a-b4b9-399da76c66be-profile-collector-cert\") pod \"olm-operator-5cdf44d969-dnqp8\" (UID: \"97853351-9834-428a-b4b9-399da76c66be\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.114573 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa4fb550-3d2a-4a19-8b93-c5e54e9b897a-default-certificate\") pod \"router-default-68cf44c8b8-jrk8q\" (UID: \"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a\") " pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.115275 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-trusted-ca-bundle\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.115715 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40b0f866-cb01-4820-863e-91d46a2fdda1-etcd-service-ca\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.116176 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz"
Jan 21 00:11:15 crc kubenswrapper[5118]: W0121 00:11:15.124196 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod609947aa_6e7c_439b_a7a7_7d06f8ab4f1c.slice/crio-5ac1d3f7ab11b457becf63ceb8e436082046d9cce8f8f0c0c870ff98bfc81a60 WatchSource:0}: Error finding container 5ac1d3f7ab11b457becf63ceb8e436082046d9cce8f8f0c0c870ff98bfc81a60: Status 404 returned error can't find the container with id 5ac1d3f7ab11b457becf63ceb8e436082046d9cce8f8f0c0c870ff98bfc81a60
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.124799 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-console-oauth-config\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.125150 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d503143-f75b-40e6-b0e3-d1bd595a05ae-installation-pull-secrets\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.132445 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40b0f866-cb01-4820-863e-91d46a2fdda1-serving-cert\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.133042 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c52d3a9-6263-4edb-9071-1d5dc43c7197-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.136410 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c52d3a9-6263-4edb-9071-1d5dc43c7197-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.138916 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476"]
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.142314 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40b0f866-cb01-4820-863e-91d46a2fdda1-etcd-client\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.146128 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gdsn\" (UniqueName: \"kubernetes.io/projected/9c52d3a9-6263-4edb-9071-1d5dc43c7197-kube-api-access-6gdsn\") pod \"cluster-image-registry-operator-86c45576b9-9zfdd\" (UID: \"9c52d3a9-6263-4edb-9071-1d5dc43c7197\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.149077 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-console-serving-cert\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.169672 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.182851 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc"]
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.190451 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tt55\" (UniqueName: \"kubernetes.io/projected/40b0f866-cb01-4820-863e-91d46a2fdda1-kube-api-access-9tt55\") pod \"etcd-operator-69b85846b6-cfnkd\" (UID: \"40b0f866-cb01-4820-863e-91d46a2fdda1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.194699 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.216193 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.216354 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa4fb550-3d2a-4a19-8b93-c5e54e9b897a-stats-auth\") pod \"router-default-68cf44c8b8-jrk8q\" (UID: \"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a\") " pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.216376 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65-serving-cert\") pod \"service-ca-operator-5b9c976747-qq2q6\" (UID: \"5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.216393 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/59298912-50d6-49ab-82d9-625a7df65661-node-bootstrap-token\") pod \"machine-config-server-f7rf5\" (UID: \"59298912-50d6-49ab-82d9-625a7df65661\") " pod="openshift-machine-config-operator/machine-config-server-f7rf5"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.216428 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-5r9pr\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.216452 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qlc2x\" (UniqueName: \"kubernetes.io/projected/c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9-kube-api-access-qlc2x\") pod \"machine-config-controller-f9cdd68f7-5wlqh\" (UID: \"c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.216471 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/97853351-9834-428a-b4b9-399da76c66be-profile-collector-cert\") pod \"olm-operator-5cdf44d969-dnqp8\" (UID: \"97853351-9834-428a-b4b9-399da76c66be\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.217220 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa4fb550-3d2a-4a19-8b93-c5e54e9b897a-default-certificate\") pod \"router-default-68cf44c8b8-jrk8q\" (UID: \"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a\") " pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.217242 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/271e0654-9d86-4ec1-8c25-d345a8a1eb0a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nkx6r\" (UID: \"271e0654-9d86-4ec1-8c25-d345a8a1eb0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nkx6r"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.217278 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa4fb550-3d2a-4a19-8b93-c5e54e9b897a-metrics-certs\") pod \"router-default-68cf44c8b8-jrk8q\" (UID: \"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a\") " pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.217296 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c693d48-122b-44a7-8257-f4f312e980aa-config-volume\") pod \"collect-profiles-29482560-6bcnb\" (UID: \"7c693d48-122b-44a7-8257-f4f312e980aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.217311 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wltq5\" (UniqueName: \"kubernetes.io/projected/cd261347-6a59-453c-836f-31c195e37417-kube-api-access-wltq5\") pod \"dns-default-jcv4b\" (UID: \"cd261347-6a59-453c-836f-31c195e37417\") " pod="openshift-dns/dns-default-jcv4b"
Jan 21 00:11:15 crc kubenswrapper[5118]: E0121 00:11:15.218600 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:15.718573536 +0000 UTC m=+131.042820554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.223671 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c693d48-122b-44a7-8257-f4f312e980aa-config-volume\") pod \"collect-profiles-29482560-6bcnb\" (UID: \"7c693d48-122b-44a7-8257-f4f312e980aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.224301 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6jsw\" (UniqueName: \"kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-kube-api-access-t6jsw\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232264 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65-config\") pod \"service-ca-operator-5b9c976747-qq2q6\" (UID: \"5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232316 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5be2081-6a80-4521-a1dd-2e332352f29c-config\") pod \"kube-storage-version-migrator-operator-565b79b866-bg6nr\" (UID:
\"a5be2081-6a80-4521-a1dd-2e332352f29c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232353 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8ec6d719-5d37-4de1-9afe-3e01bfe8d640-signing-key\") pod \"service-ca-74545575db-p65gs\" (UID: \"8ec6d719-5d37-4de1-9afe-3e01bfe8d640\") " pod="openshift-service-ca/service-ca-74545575db-p65gs" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232370 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8ppts\" (UniqueName: \"kubernetes.io/projected/5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65-kube-api-access-8ppts\") pod \"service-ca-operator-5b9c976747-qq2q6\" (UID: \"5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232387 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97853351-9834-428a-b4b9-399da76c66be-tmpfs\") pod \"olm-operator-5cdf44d969-dnqp8\" (UID: \"97853351-9834-428a-b4b9-399da76c66be\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232428 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r7qvb\" (UniqueName: \"kubernetes.io/projected/dff88ce7-473b-4e36-ae15-98b61242704c-kube-api-access-r7qvb\") pod \"catalog-operator-75ff9f647d-p7jcp\" (UID: \"dff88ce7-473b-4e36-ae15-98b61242704c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232462 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/11f14664-de3b-4cae-af94-5367cc3f2f4b-images\") pod \"machine-config-operator-67c9d58cbb-2rnqz\" (UID: \"11f14664-de3b-4cae-af94-5367cc3f2f4b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232481 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3ba5de99-4b50-4027-b9c6-f1fbb61a7146-apiservice-cert\") pod \"packageserver-7d4fc7d867-bnqhm\" (UID: \"3ba5de99-4b50-4027-b9c6-f1fbb61a7146\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232498 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd261347-6a59-453c-836f-31c195e37417-config-volume\") pod \"dns-default-jcv4b\" (UID: \"cd261347-6a59-453c-836f-31c195e37417\") " pod="openshift-dns/dns-default-jcv4b" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232531 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4fb550-3d2a-4a19-8b93-c5e54e9b897a-service-ca-bundle\") pod \"router-default-68cf44c8b8-jrk8q\" (UID: \"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a\") " pod="openshift-ingress/router-default-68cf44c8b8-jrk8q" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232546 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v2wrx\" (UniqueName: \"kubernetes.io/projected/a5be2081-6a80-4521-a1dd-2e332352f29c-kube-api-access-v2wrx\") pod \"kube-storage-version-migrator-operator-565b79b866-bg6nr\" (UID: \"a5be2081-6a80-4521-a1dd-2e332352f29c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 
00:11:15.232571 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-5wlqh\" (UID: \"c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232584 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/946b14ca-c6a8-4020-bdb0-01d5ca69b536-cert\") pod \"ingress-canary-4vjlk\" (UID: \"946b14ca-c6a8-4020-bdb0-01d5ca69b536\") " pod="openshift-ingress-canary/ingress-canary-4vjlk" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232598 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cd261347-6a59-453c-836f-31c195e37417-tmp-dir\") pod \"dns-default-jcv4b\" (UID: \"cd261347-6a59-453c-836f-31c195e37417\") " pod="openshift-dns/dns-default-jcv4b" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232614 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/26d1a4fa-1469-4128-bd56-c9a122b28068-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-g77tt\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232633 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ftz2k\" (UniqueName: \"kubernetes.io/projected/f5911eba-6406-44b4-868f-a47787c95fdf-kube-api-access-ftz2k\") pod \"migrator-866fcbc849-s4czg\" (UID: \"f5911eba-6406-44b4-868f-a47787c95fdf\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s4czg" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232652 5118 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8823ee71-944b-492d-8676-09a4f6e0103f-plugins-dir\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232687 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/485b5bf0-70af-4e4a-b766-d9e63a94395f-webhook-certs\") pod \"multus-admission-controller-69db94689b-wkjhb\" (UID: \"485b5bf0-70af-4e4a-b766-d9e63a94395f\") " pod="openshift-multus/multus-admission-controller-69db94689b-wkjhb" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232702 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dff88ce7-473b-4e36-ae15-98b61242704c-tmpfs\") pod \"catalog-operator-75ff9f647d-p7jcp\" (UID: \"dff88ce7-473b-4e36-ae15-98b61242704c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232722 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g6kcg\" (UniqueName: \"kubernetes.io/projected/946b14ca-c6a8-4020-bdb0-01d5ca69b536-kube-api-access-g6kcg\") pod \"ingress-canary-4vjlk\" (UID: \"946b14ca-c6a8-4020-bdb0-01d5ca69b536\") " pod="openshift-ingress-canary/ingress-canary-4vjlk" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232741 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-htq2h\" (UniqueName: \"kubernetes.io/projected/59298912-50d6-49ab-82d9-625a7df65661-kube-api-access-htq2h\") pod \"machine-config-server-f7rf5\" (UID: \"59298912-50d6-49ab-82d9-625a7df65661\") " pod="openshift-machine-config-operator/machine-config-server-f7rf5" Jan 21 00:11:15 crc 
kubenswrapper[5118]: I0121 00:11:15.232756 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dff88ce7-473b-4e36-ae15-98b61242704c-srv-cert\") pod \"catalog-operator-75ff9f647d-p7jcp\" (UID: \"dff88ce7-473b-4e36-ae15-98b61242704c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232784 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rvjxk\" (UniqueName: \"kubernetes.io/projected/11f14664-de3b-4cae-af94-5367cc3f2f4b-kube-api-access-rvjxk\") pod \"machine-config-operator-67c9d58cbb-2rnqz\" (UID: \"11f14664-de3b-4cae-af94-5367cc3f2f4b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232803 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3ba5de99-4b50-4027-b9c6-f1fbb61a7146-webhook-cert\") pod \"packageserver-7d4fc7d867-bnqhm\" (UID: \"3ba5de99-4b50-4027-b9c6-f1fbb61a7146\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232821 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-5wlqh\" (UID: \"c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232838 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-55qsh\" (UniqueName: \"kubernetes.io/projected/485b5bf0-70af-4e4a-b766-d9e63a94395f-kube-api-access-55qsh\") pod 
\"multus-admission-controller-69db94689b-wkjhb\" (UID: \"485b5bf0-70af-4e4a-b766-d9e63a94395f\") " pod="openshift-multus/multus-admission-controller-69db94689b-wkjhb" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232853 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3ba5de99-4b50-4027-b9c6-f1fbb61a7146-tmpfs\") pod \"packageserver-7d4fc7d867-bnqhm\" (UID: \"3ba5de99-4b50-4027-b9c6-f1fbb61a7146\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232874 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-65tf4\" (UniqueName: \"kubernetes.io/projected/aa4fb550-3d2a-4a19-8b93-c5e54e9b897a-kube-api-access-65tf4\") pod \"router-default-68cf44c8b8-jrk8q\" (UID: \"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a\") " pod="openshift-ingress/router-default-68cf44c8b8-jrk8q" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232890 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5be2081-6a80-4521-a1dd-2e332352f29c-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-bg6nr\" (UID: \"a5be2081-6a80-4521-a1dd-2e332352f29c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232904 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-5r9pr\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232927 5118 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wchdj\" (UniqueName: \"kubernetes.io/projected/26d1a4fa-1469-4128-bd56-c9a122b28068-kube-api-access-wchdj\") pod \"cni-sysctl-allowlist-ds-g77tt\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232951 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cd261347-6a59-453c-836f-31c195e37417-metrics-tls\") pod \"dns-default-jcv4b\" (UID: \"cd261347-6a59-453c-836f-31c195e37417\") " pod="openshift-dns/dns-default-jcv4b" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.232986 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bs62m\" (UniqueName: \"kubernetes.io/projected/8ec6d719-5d37-4de1-9afe-3e01bfe8d640-kube-api-access-bs62m\") pod \"service-ca-74545575db-p65gs\" (UID: \"8ec6d719-5d37-4de1-9afe-3e01bfe8d640\") " pod="openshift-service-ca/service-ca-74545575db-p65gs" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233005 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8823ee71-944b-492d-8676-09a4f6e0103f-registration-dir\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233029 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8823ee71-944b-492d-8676-09a4f6e0103f-socket-dir\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233045 5118 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-w8mlf\" (UniqueName: \"kubernetes.io/projected/97853351-9834-428a-b4b9-399da76c66be-kube-api-access-w8mlf\") pod \"olm-operator-5cdf44d969-dnqp8\" (UID: \"97853351-9834-428a-b4b9-399da76c66be\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233065 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-tmp\") pod \"marketplace-operator-547dbd544d-5r9pr\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233094 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8ec6d719-5d37-4de1-9afe-3e01bfe8d640-signing-cabundle\") pod \"service-ca-74545575db-p65gs\" (UID: \"8ec6d719-5d37-4de1-9afe-3e01bfe8d640\") " pod="openshift-service-ca/service-ca-74545575db-p65gs" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233116 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233131 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/59298912-50d6-49ab-82d9-625a7df65661-certs\") pod \"machine-config-server-f7rf5\" (UID: \"59298912-50d6-49ab-82d9-625a7df65661\") " pod="openshift-machine-config-operator/machine-config-server-f7rf5" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 
00:11:15.233172 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/11f14664-de3b-4cae-af94-5367cc3f2f4b-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-2rnqz\" (UID: \"11f14664-de3b-4cae-af94-5367cc3f2f4b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233190 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/26d1a4fa-1469-4128-bd56-c9a122b28068-ready\") pod \"cni-sysctl-allowlist-ds-g77tt\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233215 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/26d1a4fa-1469-4128-bd56-c9a122b28068-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-g77tt\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233232 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hn28v\" (UniqueName: \"kubernetes.io/projected/7c693d48-122b-44a7-8257-f4f312e980aa-kube-api-access-hn28v\") pod \"collect-profiles-29482560-6bcnb\" (UID: \"7c693d48-122b-44a7-8257-f4f312e980aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233248 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c4jzq\" (UniqueName: \"kubernetes.io/projected/8823ee71-944b-492d-8676-09a4f6e0103f-kube-api-access-c4jzq\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " 
pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233273 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c693d48-122b-44a7-8257-f4f312e980aa-secret-volume\") pod \"collect-profiles-29482560-6bcnb\" (UID: \"7c693d48-122b-44a7-8257-f4f312e980aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233293 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7dr85\" (UniqueName: \"kubernetes.io/projected/3ba5de99-4b50-4027-b9c6-f1fbb61a7146-kube-api-access-7dr85\") pod \"packageserver-7d4fc7d867-bnqhm\" (UID: \"3ba5de99-4b50-4027-b9c6-f1fbb61a7146\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233321 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8823ee71-944b-492d-8676-09a4f6e0103f-mountpoint-dir\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233336 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8823ee71-944b-492d-8676-09a4f6e0103f-csi-data-dir\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233351 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/97853351-9834-428a-b4b9-399da76c66be-srv-cert\") pod \"olm-operator-5cdf44d969-dnqp8\" (UID: 
\"97853351-9834-428a-b4b9-399da76c66be\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233376 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zc8cn\" (UniqueName: \"kubernetes.io/projected/a6c1fff3-93b2-4ea5-b74f-8f7715a04e5c-kube-api-access-zc8cn\") pod \"package-server-manager-77f986bd66-8zvdp\" (UID: \"a6c1fff3-93b2-4ea5-b74f-8f7715a04e5c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233392 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-45tbw\" (UniqueName: \"kubernetes.io/projected/271e0654-9d86-4ec1-8c25-d345a8a1eb0a-kube-api-access-45tbw\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nkx6r\" (UID: \"271e0654-9d86-4ec1-8c25-d345a8a1eb0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nkx6r" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233412 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xg6xt\" (UniqueName: \"kubernetes.io/projected/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-kube-api-access-xg6xt\") pod \"marketplace-operator-547dbd544d-5r9pr\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233427 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dff88ce7-473b-4e36-ae15-98b61242704c-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-p7jcp\" (UID: \"dff88ce7-473b-4e36-ae15-98b61242704c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233448 5118 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/11f14664-de3b-4cae-af94-5367cc3f2f4b-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-2rnqz\" (UID: \"11f14664-de3b-4cae-af94-5367cc3f2f4b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.233463 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6c1fff3-93b2-4ea5-b74f-8f7715a04e5c-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-8zvdp\" (UID: \"a6c1fff3-93b2-4ea5-b74f-8f7715a04e5c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.234901 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65-config\") pod \"service-ca-operator-5b9c976747-qq2q6\" (UID: \"5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.235475 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5be2081-6a80-4521-a1dd-2e332352f29c-config\") pod \"kube-storage-version-migrator-operator-565b79b866-bg6nr\" (UID: \"a5be2081-6a80-4521-a1dd-2e332352f29c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.235897 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8823ee71-944b-492d-8676-09a4f6e0103f-csi-data-dir\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " 
pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.236480 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/26d1a4fa-1469-4128-bd56-c9a122b28068-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-g77tt\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.237121 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/26d1a4fa-1469-4128-bd56-c9a122b28068-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-g77tt\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.237190 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8823ee71-944b-492d-8676-09a4f6e0103f-socket-dir\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.237675 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/271e0654-9d86-4ec1-8c25-d345a8a1eb0a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nkx6r\" (UID: \"271e0654-9d86-4ec1-8c25-d345a8a1eb0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nkx6r" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.238460 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/11f14664-de3b-4cae-af94-5367cc3f2f4b-images\") pod \"machine-config-operator-67c9d58cbb-2rnqz\" (UID: 
\"11f14664-de3b-4cae-af94-5367cc3f2f4b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.238578 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8823ee71-944b-492d-8676-09a4f6e0103f-mountpoint-dir\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.239574 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd261347-6a59-453c-836f-31c195e37417-config-volume\") pod \"dns-default-jcv4b\" (UID: \"cd261347-6a59-453c-836f-31c195e37417\") " pod="openshift-dns/dns-default-jcv4b"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.240230 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4fb550-3d2a-4a19-8b93-c5e54e9b897a-service-ca-bundle\") pod \"router-default-68cf44c8b8-jrk8q\" (UID: \"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a\") " pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.243337 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cd261347-6a59-453c-836f-31c195e37417-tmp-dir\") pod \"dns-default-jcv4b\" (UID: \"cd261347-6a59-453c-836f-31c195e37417\") " pod="openshift-dns/dns-default-jcv4b"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.244923 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3ba5de99-4b50-4027-b9c6-f1fbb61a7146-apiservice-cert\") pod \"packageserver-7d4fc7d867-bnqhm\" (UID: \"3ba5de99-4b50-4027-b9c6-f1fbb61a7146\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.245602 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8823ee71-944b-492d-8676-09a4f6e0103f-plugins-dir\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.246682 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/97853351-9834-428a-b4b9-399da76c66be-tmpfs\") pod \"olm-operator-5cdf44d969-dnqp8\" (UID: \"97853351-9834-428a-b4b9-399da76c66be\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.247361 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dff88ce7-473b-4e36-ae15-98b61242704c-tmpfs\") pod \"catalog-operator-75ff9f647d-p7jcp\" (UID: \"dff88ce7-473b-4e36-ae15-98b61242704c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.247688 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/26d1a4fa-1469-4128-bd56-c9a122b28068-ready\") pod \"cni-sysctl-allowlist-ds-g77tt\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.247734 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-tmp\") pod \"marketplace-operator-547dbd544d-5r9pr\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.249653 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-5wlqh\" (UID: \"c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.250023 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3ba5de99-4b50-4027-b9c6-f1fbb61a7146-tmpfs\") pod \"packageserver-7d4fc7d867-bnqhm\" (UID: \"3ba5de99-4b50-4027-b9c6-f1fbb61a7146\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.250336 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c693d48-122b-44a7-8257-f4f312e980aa-secret-volume\") pod \"collect-profiles-29482560-6bcnb\" (UID: \"7c693d48-122b-44a7-8257-f4f312e980aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.251334 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/485b5bf0-70af-4e4a-b766-d9e63a94395f-webhook-certs\") pod \"multus-admission-controller-69db94689b-wkjhb\" (UID: \"485b5bf0-70af-4e4a-b766-d9e63a94395f\") " pod="openshift-multus/multus-admission-controller-69db94689b-wkjhb"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.251585 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8823ee71-944b-492d-8676-09a4f6e0103f-registration-dir\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw"
Jan 21 00:11:15 crc kubenswrapper[5118]: E0121 00:11:15.253022 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:15.75300761 +0000 UTC m=+131.077254618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.253458 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-bound-sa-token\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.254255 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/11f14664-de3b-4cae-af94-5367cc3f2f4b-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-2rnqz\" (UID: \"11f14664-de3b-4cae-af94-5367cc3f2f4b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.254482 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-5r9pr\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.258401 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-5wlqh\" (UID: \"c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.260600 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/11f14664-de3b-4cae-af94-5367cc3f2f4b-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-2rnqz\" (UID: \"11f14664-de3b-4cae-af94-5367cc3f2f4b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.261373 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa4fb550-3d2a-4a19-8b93-c5e54e9b897a-metrics-certs\") pod \"router-default-68cf44c8b8-jrk8q\" (UID: \"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a\") " pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.271565 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dff88ce7-473b-4e36-ae15-98b61242704c-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-p7jcp\" (UID: \"dff88ce7-473b-4e36-ae15-98b61242704c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.272309 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8ec6d719-5d37-4de1-9afe-3e01bfe8d640-signing-cabundle\") pod \"service-ca-74545575db-p65gs\" (UID: \"8ec6d719-5d37-4de1-9afe-3e01bfe8d640\") " pod="openshift-service-ca/service-ca-74545575db-p65gs"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.276030 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-5r9pr\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.276077 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-96762\" (UniqueName: \"kubernetes.io/projected/6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a-kube-api-access-96762\") pod \"console-64d44f6ddf-xbtg4\" (UID: \"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a\") " pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.276223 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/97853351-9834-428a-b4b9-399da76c66be-srv-cert\") pod \"olm-operator-5cdf44d969-dnqp8\" (UID: \"97853351-9834-428a-b4b9-399da76c66be\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.286078 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/946b14ca-c6a8-4020-bdb0-01d5ca69b536-cert\") pod \"ingress-canary-4vjlk\" (UID: \"946b14ca-c6a8-4020-bdb0-01d5ca69b536\") " pod="openshift-ingress-canary/ingress-canary-4vjlk"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.286823 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5be2081-6a80-4521-a1dd-2e332352f29c-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-bg6nr\" (UID: \"a5be2081-6a80-4521-a1dd-2e332352f29c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.312669 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29482560-n7qwb"]
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.313619 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3ba5de99-4b50-4027-b9c6-f1fbb61a7146-webhook-cert\") pod \"packageserver-7d4fc7d867-bnqhm\" (UID: \"3ba5de99-4b50-4027-b9c6-f1fbb61a7146\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.314652 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6c1fff3-93b2-4ea5-b74f-8f7715a04e5c-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-8zvdp\" (UID: \"a6c1fff3-93b2-4ea5-b74f-8f7715a04e5c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.314845 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa4fb550-3d2a-4a19-8b93-c5e54e9b897a-stats-auth\") pod \"router-default-68cf44c8b8-jrk8q\" (UID: \"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a\") " pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.314919 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/59298912-50d6-49ab-82d9-625a7df65661-node-bootstrap-token\") pod \"machine-config-server-f7rf5\" (UID: \"59298912-50d6-49ab-82d9-625a7df65661\") " pod="openshift-machine-config-operator/machine-config-server-f7rf5"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.314956 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65-serving-cert\") pod \"service-ca-operator-5b9c976747-qq2q6\" (UID: \"5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.315126 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/97853351-9834-428a-b4b9-399da76c66be-profile-collector-cert\") pod \"olm-operator-5cdf44d969-dnqp8\" (UID: \"97853351-9834-428a-b4b9-399da76c66be\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.315867 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8ec6d719-5d37-4de1-9afe-3e01bfe8d640-signing-key\") pod \"service-ca-74545575db-p65gs\" (UID: \"8ec6d719-5d37-4de1-9afe-3e01bfe8d640\") " pod="openshift-service-ca/service-ca-74545575db-p65gs"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.334507 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.340006 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:15 crc kubenswrapper[5118]: E0121 00:11:15.340622 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:15.840600336 +0000 UTC m=+131.164847354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.349461 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cd261347-6a59-453c-836f-31c195e37417-metrics-tls\") pod \"dns-default-jcv4b\" (UID: \"cd261347-6a59-453c-836f-31c195e37417\") " pod="openshift-dns/dns-default-jcv4b"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.353103 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/59298912-50d6-49ab-82d9-625a7df65661-certs\") pod \"machine-config-server-f7rf5\" (UID: \"59298912-50d6-49ab-82d9-625a7df65661\") " pod="openshift-machine-config-operator/machine-config-server-f7rf5"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.354775 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-pdh68"]
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.373448 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wltq5\" (UniqueName: \"kubernetes.io/projected/cd261347-6a59-453c-836f-31c195e37417-kube-api-access-wltq5\") pod \"dns-default-jcv4b\" (UID: \"cd261347-6a59-453c-836f-31c195e37417\") " pod="openshift-dns/dns-default-jcv4b"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.375814 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlc2x\" (UniqueName: \"kubernetes.io/projected/c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9-kube-api-access-qlc2x\") pod \"machine-config-controller-f9cdd68f7-5wlqh\" (UID: \"c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.378732 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa4fb550-3d2a-4a19-8b93-c5e54e9b897a-default-certificate\") pod \"router-default-68cf44c8b8-jrk8q\" (UID: \"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a\") " pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.380031 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dff88ce7-473b-4e36-ae15-98b61242704c-srv-cert\") pod \"catalog-operator-75ff9f647d-p7jcp\" (UID: \"dff88ce7-473b-4e36-ae15-98b61242704c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.380738 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-65tf4\" (UniqueName: \"kubernetes.io/projected/aa4fb550-3d2a-4a19-8b93-c5e54e9b897a-kube-api-access-65tf4\") pod \"router-default-68cf44c8b8-jrk8q\" (UID: \"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a\") " pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.397432 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4jzq\" (UniqueName: \"kubernetes.io/projected/8823ee71-944b-492d-8676-09a4f6e0103f-kube-api-access-c4jzq\") pod \"csi-hostpathplugin-t9lqw\" (UID: \"8823ee71-944b-492d-8676-09a4f6e0103f\") " pod="hostpath-provisioner/csi-hostpathplugin-t9lqw"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.406492 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.408607 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn28v\" (UniqueName: \"kubernetes.io/projected/7c693d48-122b-44a7-8257-f4f312e980aa-kube-api-access-hn28v\") pod \"collect-profiles-29482560-6bcnb\" (UID: \"7c693d48-122b-44a7-8257-f4f312e980aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.417659 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7qvb\" (UniqueName: \"kubernetes.io/projected/dff88ce7-473b-4e36-ae15-98b61242704c-kube-api-access-r7qvb\") pod \"catalog-operator-75ff9f647d-p7jcp\" (UID: \"dff88ce7-473b-4e36-ae15-98b61242704c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.418058 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ppts\" (UniqueName: \"kubernetes.io/projected/5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65-kube-api-access-8ppts\") pod \"service-ca-operator-5b9c976747-qq2q6\" (UID: \"5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.434202 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dr85\" (UniqueName: \"kubernetes.io/projected/3ba5de99-4b50-4027-b9c6-f1fbb61a7146-kube-api-access-7dr85\") pod \"packageserver-7d4fc7d867-bnqhm\" (UID: \"3ba5de99-4b50-4027-b9c6-f1fbb61a7146\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.443051 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:15 crc kubenswrapper[5118]: E0121 00:11:15.443534 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:15.943520298 +0000 UTC m=+131.267767316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.453135 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2wrx\" (UniqueName: \"kubernetes.io/projected/a5be2081-6a80-4521-a1dd-2e332352f29c-kube-api-access-v2wrx\") pod \"kube-storage-version-migrator-operator-565b79b866-bg6nr\" (UID: \"a5be2081-6a80-4521-a1dd-2e332352f29c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.477653 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-45tbw\" (UniqueName: \"kubernetes.io/projected/271e0654-9d86-4ec1-8c25-d345a8a1eb0a-kube-api-access-45tbw\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nkx6r\" (UID: \"271e0654-9d86-4ec1-8c25-d345a8a1eb0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nkx6r"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.493586 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-t9lqw"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.500347 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.500441 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc8cn\" (UniqueName: \"kubernetes.io/projected/a6c1fff3-93b2-4ea5-b74f-8f7715a04e5c-kube-api-access-zc8cn\") pod \"package-server-manager-77f986bd66-8zvdp\" (UID: \"a6c1fff3-93b2-4ea5-b74f-8f7715a04e5c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.501007 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-htq2h\" (UniqueName: \"kubernetes.io/projected/59298912-50d6-49ab-82d9-625a7df65661-kube-api-access-htq2h\") pod \"machine-config-server-f7rf5\" (UID: \"59298912-50d6-49ab-82d9-625a7df65661\") " pod="openshift-machine-config-operator/machine-config-server-f7rf5"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.540482 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftz2k\" (UniqueName: \"kubernetes.io/projected/f5911eba-6406-44b4-868f-a47787c95fdf-kube-api-access-ftz2k\") pod \"migrator-866fcbc849-s4czg\" (UID: \"f5911eba-6406-44b4-868f-a47787c95fdf\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s4czg"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.544248 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:15 crc kubenswrapper[5118]: E0121 00:11:15.544365 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:16.044344915 +0000 UTC m=+131.368591933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.544663 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:15 crc kubenswrapper[5118]: E0121 00:11:15.544946 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:16.044938491 +0000 UTC m=+131.369185509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.551761 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8mlf\" (UniqueName: \"kubernetes.io/projected/97853351-9834-428a-b4b9-399da76c66be-kube-api-access-w8mlf\") pod \"olm-operator-5cdf44d969-dnqp8\" (UID: \"97853351-9834-428a-b4b9-399da76c66be\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.553043 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wchdj\" (UniqueName: \"kubernetes.io/projected/26d1a4fa-1469-4128-bd56-c9a122b28068-kube-api-access-wchdj\") pod \"cni-sysctl-allowlist-ds-g77tt\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.563315 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.570422 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.580576 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.586008 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.603090 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg6xt\" (UniqueName: \"kubernetes.io/projected/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-kube-api-access-xg6xt\") pod \"marketplace-operator-547dbd544d-5r9pr\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.603785 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6kcg\" (UniqueName: \"kubernetes.io/projected/946b14ca-c6a8-4020-bdb0-01d5ca69b536-kube-api-access-g6kcg\") pod \"ingress-canary-4vjlk\" (UID: \"946b14ca-c6a8-4020-bdb0-01d5ca69b536\") " pod="openshift-ingress-canary/ingress-canary-4vjlk"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.608547 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s4czg"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.632814 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.634384 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nkx6r"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.640090 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-55qsh\" (UniqueName: \"kubernetes.io/projected/485b5bf0-70af-4e4a-b766-d9e63a94395f-kube-api-access-55qsh\") pod \"multus-admission-controller-69db94689b-wkjhb\" (UID: \"485b5bf0-70af-4e4a-b766-d9e63a94395f\") " pod="openshift-multus/multus-admission-controller-69db94689b-wkjhb"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.645442 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" event={"ID":"6796e6ff-3d28-4061-a0b4-cd8088da6919","Type":"ContainerStarted","Data":"1ec8ad430458d17a37fdeb3a902ea08cbf1e2dfc03c2d4cc8230bb8a24b71a83"}
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.645918 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:15 crc kubenswrapper[5118]: E0121 00:11:15.646285 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:16.146255601 +0000 UTC m=+131.470502609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.646416 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:15 crc kubenswrapper[5118]: E0121 00:11:15.646882 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:16.146865157 +0000 UTC m=+131.471112165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.648572 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" event={"ID":"609947aa-6e7c-439b-a7a7-7d06f8ab4f1c","Type":"ContainerStarted","Data":"5ac1d3f7ab11b457becf63ceb8e436082046d9cce8f8f0c0c870ff98bfc81a60"}
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.649287 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs62m\" (UniqueName: \"kubernetes.io/projected/8ec6d719-5d37-4de1-9afe-3e01bfe8d640-kube-api-access-bs62m\") pod \"service-ca-74545575db-p65gs\" (UID: \"8ec6d719-5d37-4de1-9afe-3e01bfe8d640\") " pod="openshift-service-ca/service-ca-74545575db-p65gs"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.650050 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.653311 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476" event={"ID":"bd4c6f53-d565-473d-9d09-b5190fa3d71a","Type":"ContainerStarted","Data":"202da088ec15ef57a6b4e75000304f864982c45ca980619c9a3892fa7df90f2c"}
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.657413 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" event={"ID":"19280e75-8f04-47d1-bc42-124082dfd247","Type":"ContainerStarted","Data":"7c18869c859528ea916fd3e1d6ac70a3b59c0491590f8c2bad1b1e2b78cc4083"}
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.658721 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.659345 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" event={"ID":"4d8423b3-e68c-4083-859f-e89f705f28bd","Type":"ContainerStarted","Data":"876c58cd1b48a5de4562f5a8b4e07f3887ac36b3e968c35b888d6cfd6c7d62c6"}
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.660485 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvjxk\" (UniqueName: \"kubernetes.io/projected/11f14664-de3b-4cae-af94-5367cc3f2f4b-kube-api-access-rvjxk\") pod \"machine-config-operator-67c9d58cbb-2rnqz\" (UID: \"11f14664-de3b-4cae-af94-5367cc3f2f4b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.663643 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.673622 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jcv4b"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.679422 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.690845 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.693558 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" event={"ID":"79567875-e72c-4685-8919-03cda9a6f644","Type":"ContainerStarted","Data":"863b07f4b6519aeff5eb128873af649908be220bdd7df98c0f6f4e24c2c1e816"}
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.693596 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" event={"ID":"79567875-e72c-4685-8919-03cda9a6f644","Type":"ContainerStarted","Data":"810fe0e8748b3e2f680ed2249ac8f2238743e8476e657ddd502b70ffbdb9d7d0"}
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.701581 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4vjlk"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.711456 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-f7rf5"
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.753086 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.753317 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt"
Jan 21 00:11:15 crc kubenswrapper[5118]: E0121 00:11:15.753417 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:16.253399786 +0000 UTC m=+131.577646804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.757137 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:15 crc kubenswrapper[5118]: E0121 00:11:15.757461 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:16.257445093 +0000 UTC m=+131.581692111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.775440 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-p65gs" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.781853 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" event={"ID":"1968a714-512b-40f9-a302-f8905b0855fd","Type":"ContainerStarted","Data":"8fea837de3e84be13255f5a57032997f075300f5a76a37e708bdcd543664c862"} Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.782449 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.802583 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9" event={"ID":"1202d380-a207-455c-8bd8-2b82e7974afa","Type":"ContainerStarted","Data":"a959cf7abca020583b6514c631af4471a9188ca413bab51cd6553597f503c9e5"} Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.827485 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.858175 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:15 crc kubenswrapper[5118]: E0121 00:11:15.862249 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-21 00:11:16.362223375 +0000 UTC m=+131.686470383 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.908956 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-jnbtq"] Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.915000 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg"] Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.917659 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-wkjhb" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.944530 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz" Jan 21 00:11:15 crc kubenswrapper[5118]: I0121 00:11:15.963678 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:15 crc kubenswrapper[5118]: E0121 00:11:15.964398 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:16.464363807 +0000 UTC m=+131.788610835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.009985 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.059882 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c"] Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.065030 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j"] Jan 21 00:11:16 crc 
kubenswrapper[5118]: I0121 00:11:16.066712 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:16 crc kubenswrapper[5118]: E0121 00:11:16.067297 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:16.56727665 +0000 UTC m=+131.891523668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.076900 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7"] Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.120758 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-5ds28"] Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.123880 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd"] Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.136717 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-5777786469-jdqmz"] Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.169141 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:16 crc kubenswrapper[5118]: E0121 00:11:16.169815 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:16.669764771 +0000 UTC m=+131.994011799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.272641 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:16 crc kubenswrapper[5118]: E0121 00:11:16.273369 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:16.773351581 +0000 UTC m=+132.097598599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:16 crc kubenswrapper[5118]: W0121 00:11:16.318967 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd042745d_98a0_44c8_ac92_7704d8b43b84.slice/crio-091bc322dc43a9701a13e3fee643493e47b70224afa192340afe86f133e7ee5c WatchSource:0}: Error finding container 091bc322dc43a9701a13e3fee643493e47b70224afa192340afe86f133e7ee5c: Status 404 returned error can't find the container with id 091bc322dc43a9701a13e3fee643493e47b70224afa192340afe86f133e7ee5c Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.374219 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:16 crc kubenswrapper[5118]: E0121 00:11:16.374629 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:16.87461272 +0000 UTC m=+132.198859738 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.458188 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" podStartSLOduration=110.458151838 podStartE2EDuration="1m50.458151838s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:16.457670205 +0000 UTC m=+131.781917223" watchObservedRunningTime="2026-01-21 00:11:16.458151838 +0000 UTC m=+131.782398866" Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.475102 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:16 crc kubenswrapper[5118]: E0121 00:11:16.475461 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:16.975441457 +0000 UTC m=+132.299688475 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:16 crc kubenswrapper[5118]: W0121 00:11:16.557307 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba04635f_4c5f_4669_af58_97627beae1b2.slice/crio-86f0fa354f6bbad8a0290688c1079df9d4272818acbf946f0627f8c474426fd6 WatchSource:0}: Error finding container 86f0fa354f6bbad8a0290688c1079df9d4272818acbf946f0627f8c474426fd6: Status 404 returned error can't find the container with id 86f0fa354f6bbad8a0290688c1079df9d4272818acbf946f0627f8c474426fd6 Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.565494 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-t9lqw"] Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.578105 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:16 crc kubenswrapper[5118]: E0121 00:11:16.578514 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:17.078499813 +0000 UTC m=+132.402746831 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.633680 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" podStartSLOduration=110.633660088 podStartE2EDuration="1m50.633660088s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:16.594833597 +0000 UTC m=+131.919080635" watchObservedRunningTime="2026-01-21 00:11:16.633660088 +0000 UTC m=+131.957907106" Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.635837 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-r2gm9" podStartSLOduration=110.635825315 podStartE2EDuration="1m50.635825315s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:16.631473609 +0000 UTC m=+131.955720627" watchObservedRunningTime="2026-01-21 00:11:16.635825315 +0000 UTC m=+131.960072333" Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.679126 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" 
(UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:16 crc kubenswrapper[5118]: E0121 00:11:16.681145 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:17.181112877 +0000 UTC m=+132.505359895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.681612 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:16 crc kubenswrapper[5118]: E0121 00:11:16.682093 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:17.182084593 +0000 UTC m=+132.506331611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.794004 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:16 crc kubenswrapper[5118]: E0121 00:11:16.794924 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:17.294901249 +0000 UTC m=+132.619148267 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.819560 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" event={"ID":"9c52d3a9-6263-4edb-9071-1d5dc43c7197","Type":"ContainerStarted","Data":"5d97a69259a7ffc364c1698e3559bd5dfa9c83acabfb03c24c66ede9d1be5db3"} Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.824071 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7" event={"ID":"d042745d-98a0-44c8-ac92-7704d8b43b84","Type":"ContainerStarted","Data":"091bc322dc43a9701a13e3fee643493e47b70224afa192340afe86f133e7ee5c"} Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.825662 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" event={"ID":"609947aa-6e7c-439b-a7a7-7d06f8ab4f1c","Type":"ContainerStarted","Data":"a071a6500eae78fadae698d6ba1f0bfe66949516769fdf6162a265c243a11015"} Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.831203 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-5zl5v" podStartSLOduration=110.831183632 podStartE2EDuration="1m50.831183632s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 
00:11:16.818605948 +0000 UTC m=+132.142852966" watchObservedRunningTime="2026-01-21 00:11:16.831183632 +0000 UTC m=+132.155430650" Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.839112 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd"] Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.848619 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-pdh68" event={"ID":"f2431df6-6390-4fb8-b13e-56750ad2fed4","Type":"ContainerStarted","Data":"cb5e3359c38878e0a0c7ce6b59282e7aece9b7c48d349021872ff907844c8c72"} Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.848660 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-pdh68" event={"ID":"f2431df6-6390-4fb8-b13e-56750ad2fed4","Type":"ContainerStarted","Data":"a13558933948986715148a6a3ba316c19415ef32d4fc35a514a9ef22c90e281a"} Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.849352 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-pdh68" Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.851472 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29482560-n7qwb" event={"ID":"ae767afd-59d5-4c04-9ecc-f9ae7b317698","Type":"ContainerStarted","Data":"feaa79e58645e3c369904a4c700a10edd742db3b375824cd728bb262eb7a3678"} Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.851517 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29482560-n7qwb" event={"ID":"ae767afd-59d5-4c04-9ecc-f9ae7b317698","Type":"ContainerStarted","Data":"c5fb597f64097be0d3b8fbd22e355dc53d92d2fedac71dc8e7f20e463d206c0d"} Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.855657 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mn7c4" podStartSLOduration=110.855641651 podStartE2EDuration="1m50.855641651s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:16.853447243 +0000 UTC m=+132.177694261" watchObservedRunningTime="2026-01-21 00:11:16.855641651 +0000 UTC m=+132.179888669"
Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.859012 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-xbtg4"]
Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.859105 5118 patch_prober.go:28] interesting pod/downloads-747b44746d-pdh68 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body=
Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.859136 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-pdh68" podUID="f2431df6-6390-4fb8-b13e-56750ad2fed4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused"
Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.887583 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-5ds28" event={"ID":"ba04635f-4c5f-4669-af58-97627beae1b2","Type":"ContainerStarted","Data":"86f0fa354f6bbad8a0290688c1079df9d4272818acbf946f0627f8c474426fd6"}
Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.898690 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:16 crc kubenswrapper[5118]: E0121 00:11:16.899056 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:17.399042474 +0000 UTC m=+132.723289492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.943009 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-f7rf5" event={"ID":"59298912-50d6-49ab-82d9-625a7df65661","Type":"ContainerStarted","Data":"e3b0fcb45410d01c458a1f4c39ec57d1af8b9da285b724ba7fb9bb65c93ccceb"}
Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.962541 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" event={"ID":"8823ee71-944b-492d-8676-09a4f6e0103f","Type":"ContainerStarted","Data":"74efb34475addac1d0e3c1d6f5acb4a61d52485987224280cccf984a5d8a8377"}
Jan 21 00:11:16 crc kubenswrapper[5118]: I0121 00:11:16.975493 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j" event={"ID":"65be3f94-f1d5-4ebb-933f-216e1650f309","Type":"ContainerStarted","Data":"e54fefe90c36fa1937845c1dcff64485d26bf34204e85c2836267bb3f318cf7d"}
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.005371 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.005848 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" event={"ID":"26d1a4fa-1469-4128-bd56-c9a122b28068","Type":"ContainerStarted","Data":"b17a1bc2007aef7c810b2739ebfbd9a82c6711e46c827105e7ec831963ed0a27"}
Jan 21 00:11:17 crc kubenswrapper[5118]: E0121 00:11:17.005867 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:17.505838709 +0000 UTC m=+132.830085727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.006055 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:17 crc kubenswrapper[5118]: E0121 00:11:17.007306 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:17.507297108 +0000 UTC m=+132.831544126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.015285 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr"]
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.024352 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh"]
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.030073 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-jrk8q" event={"ID":"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a","Type":"ContainerStarted","Data":"5a64caf27de6df3d18bc11fae06544e5b13ce29bb29ee407f099a587e62211a4"}
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.030125 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-jrk8q" event={"ID":"aa4fb550-3d2a-4a19-8b93-c5e54e9b897a","Type":"ContainerStarted","Data":"fe009b2721b7e5e1cc1362c979b9e24a353c007078117f3cfee90b034386a4c5"}
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.031785 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nkx6r"]
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.038063 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-s4czg"]
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.042585 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp"]
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.047044 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm"]
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.063810 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz" event={"ID":"6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f","Type":"ContainerStarted","Data":"7a76b97163e3005b289f0783ab813ff29376df3da6b579201fdcadc8e01da61c"}
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.073182 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c" event={"ID":"940d93fe-9ecf-4274-9caf-6123a0ce203c","Type":"ContainerStarted","Data":"06de1c96c4b0373d57270d91d07663e73e839dc697827f3eb1e674bf3d1b5c56"}
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.078937 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" podStartSLOduration=111.078909679 podStartE2EDuration="1m51.078909679s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:17.074443331 +0000 UTC m=+132.398690369" watchObservedRunningTime="2026-01-21 00:11:17.078909679 +0000 UTC m=+132.403156707"
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.092517 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476" event={"ID":"bd4c6f53-d565-473d-9d09-b5190fa3d71a","Type":"ContainerStarted","Data":"c43d72ca949139a3c59c38ceb24d519d6c8c6fa78ef8bdc9b7a38d86b6641b12"}
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.108656 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:17 crc kubenswrapper[5118]: E0121 00:11:17.110326 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:17.610304483 +0000 UTC m=+132.934551501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.129590 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb"]
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.142573 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-jnbtq" event={"ID":"39000d94-11b8-42ff-a127-2136d0f2cc0b","Type":"ContainerStarted","Data":"35dfaa87417d1edf094e5f81954b5078a02fb688b9ca42944453cb26c58a5983"}
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.150323 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg" event={"ID":"1f2cd766-1945-4e3d-aa8a-4045eacb2ff8","Type":"ContainerStarted","Data":"cbfebd81f79f090f630f36c89a0475aa16892596c265bc45bde076b1f8f35e15"}
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.158602 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg" event={"ID":"1f2cd766-1945-4e3d-aa8a-4045eacb2ff8","Type":"ContainerStarted","Data":"f11cd3955d50e1020668bfbb6c37635982f7447f6db3c06e1a34ec38bce4b906"}
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.162388 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6"]
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.188727 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" event={"ID":"6796e6ff-3d28-4061-a0b4-cd8088da6919","Type":"ContainerStarted","Data":"aa49c65914dd69c7f70b74981d2d6558c77fb7f3893ba03e652c97b2f36b79b4"}
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.197477 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8"]
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.211980 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:17 crc kubenswrapper[5118]: E0121 00:11:17.220572 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:17.72055337 +0000 UTC m=+133.044800388 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.240801 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"]
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.243113 5118 generic.go:358] "Generic (PLEG): container finished" podID="4d8423b3-e68c-4083-859f-e89f705f28bd" containerID="30014b864d285400b34371d296e46835398d19823590d39198dffe3f73376d47" exitCode=0
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.243448 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" event={"ID":"4d8423b3-e68c-4083-859f-e89f705f28bd","Type":"ContainerDied","Data":"30014b864d285400b34371d296e46835398d19823590d39198dffe3f73376d47"}
Jan 21 00:11:17 crc kubenswrapper[5118]: W0121 00:11:17.274761 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97853351_9834_428a_b4b9_399da76c66be.slice/crio-e89b972653ed4162261b012ad69ef26867bc47441ca7de3c82904feb36b4fa76 WatchSource:0}: Error finding container e89b972653ed4162261b012ad69ef26867bc47441ca7de3c82904feb36b4fa76: Status 404 returned error can't find the container with id e89b972653ed4162261b012ad69ef26867bc47441ca7de3c82904feb36b4fa76
Jan 21 00:11:17 crc kubenswrapper[5118]: W0121 00:11:17.285488 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fa95ab2_7bd1_49ba_bbe4_31209e7b7a65.slice/crio-caab7d230667c66e4d05f1bedc628f637082b725aeb8f2224134c33566c18620 WatchSource:0}: Error finding container caab7d230667c66e4d05f1bedc628f637082b725aeb8f2224134c33566c18620: Status 404 returned error can't find the container with id caab7d230667c66e4d05f1bedc628f637082b725aeb8f2224134c33566c18620
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.290904 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jcv4b"]
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.308353 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-c5g8n" podStartSLOduration=111.307663693 podStartE2EDuration="1m51.307663693s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:17.29589343 +0000 UTC m=+132.620140448" watchObservedRunningTime="2026-01-21 00:11:17.307663693 +0000 UTC m=+132.631910711"
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.312300 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4vjlk"]
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.319853 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-wkjhb"]
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.321604 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:17 crc kubenswrapper[5118]: E0121 00:11:17.323405 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:17.82338575 +0000 UTC m=+133.147632768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.325779 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.339930 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-p65gs"]
Jan 21 00:11:17 crc kubenswrapper[5118]: E0121 00:11:17.341381 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:17.841361908 +0000 UTC m=+133.165609006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.348341 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz"]
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.362030 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp"]
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.420751 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-pdh68" podStartSLOduration=111.420732975 podStartE2EDuration="1m51.420732975s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:17.38064419 +0000 UTC m=+132.704891208" watchObservedRunningTime="2026-01-21 00:11:17.420732975 +0000 UTC m=+132.744979993"
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.421328 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-f7rf5" podStartSLOduration=6.42132067 podStartE2EDuration="6.42132067s" podCreationTimestamp="2026-01-21 00:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:17.419247575 +0000 UTC m=+132.743494583" watchObservedRunningTime="2026-01-21 00:11:17.42132067 +0000 UTC m=+132.745567688"
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.434412 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:17 crc kubenswrapper[5118]: E0121 00:11:17.434736 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:17.934717016 +0000 UTC m=+133.258964034 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:17 crc kubenswrapper[5118]: W0121 00:11:17.439837 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod946b14ca_c6a8_4020_bdb0_01d5ca69b536.slice/crio-f5223cb0eeaa5e231a86ce4bb2382e7e944664a4ef932a7a6e8b462ffd33fa74 WatchSource:0}: Error finding container f5223cb0eeaa5e231a86ce4bb2382e7e944664a4ef932a7a6e8b462ffd33fa74: Status 404 returned error can't find the container with id f5223cb0eeaa5e231a86ce4bb2382e7e944664a4ef932a7a6e8b462ffd33fa74
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.457804 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29482560-n7qwb" podStartSLOduration=111.457787359 podStartE2EDuration="1m51.457787359s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:17.455709303 +0000 UTC m=+132.779956321" watchObservedRunningTime="2026-01-21 00:11:17.457787359 +0000 UTC m=+132.782034377"
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.534636 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-89cvg" podStartSLOduration=111.534621468 podStartE2EDuration="1m51.534621468s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:17.534188616 +0000 UTC m=+132.858435634" watchObservedRunningTime="2026-01-21 00:11:17.534621468 +0000 UTC m=+132.858868486"
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.539606 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:17 crc kubenswrapper[5118]: E0121 00:11:17.539977 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:18.03996398 +0000 UTC m=+133.364210998 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.574327 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.588999 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrk8q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 00:11:17 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld
Jan 21 00:11:17 crc kubenswrapper[5118]: [+]process-running ok
Jan 21 00:11:17 crc kubenswrapper[5118]: healthz check failed
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.589046 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrk8q" podUID="aa4fb550-3d2a-4a19-8b93-c5e54e9b897a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.640951 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:17 crc kubenswrapper[5118]: E0121 00:11:17.641540 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:18.141524626 +0000 UTC m=+133.465771634 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.664413 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" podStartSLOduration=111.664392983 podStartE2EDuration="1m51.664392983s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:17.6217136 +0000 UTC m=+132.945960618" watchObservedRunningTime="2026-01-21 00:11:17.664392983 +0000 UTC m=+132.988640011"
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.665908 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-jrk8q" podStartSLOduration=111.665900873 podStartE2EDuration="1m51.665900873s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:17.663644293 +0000 UTC m=+132.987891301" watchObservedRunningTime="2026-01-21 00:11:17.665900873 +0000 UTC m=+132.990147891"
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.694529 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-6d476" podStartSLOduration=111.694507203 podStartE2EDuration="1m51.694507203s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:17.693830875 +0000 UTC m=+133.018077893" watchObservedRunningTime="2026-01-21 00:11:17.694507203 +0000 UTC m=+133.018754231"
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.744811 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:17 crc kubenswrapper[5118]: E0121 00:11:17.745549 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:18.245533498 +0000 UTC m=+133.569780516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.848775 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:17 crc kubenswrapper[5118]: E0121 00:11:17.849030 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:18.349015588 +0000 UTC m=+133.673262606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:17 crc kubenswrapper[5118]: I0121 00:11:17.950501 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:17 crc kubenswrapper[5118]: E0121 00:11:17.951293 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:18.451280362 +0000 UTC m=+133.775527380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.051799 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:18 crc kubenswrapper[5118]: E0121 00:11:18.052465 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:18.552443808 +0000 UTC m=+133.876690826 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.155444 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:18 crc kubenswrapper[5118]: E0121 00:11:18.156267 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:18.656251462 +0000 UTC m=+133.980498480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.238538 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56930: no serving certificate available for the kubelet"
Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.284867 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:18 crc kubenswrapper[5118]: E0121 00:11:18.284992 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:18.784967737 +0000 UTC m=+134.109214755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.285083 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:18 crc kubenswrapper[5118]: E0121 00:11:18.285489 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:18.78548253 +0000 UTC m=+134.109729538 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.318538 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56932: no serving certificate available for the kubelet" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.355453 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" event={"ID":"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00","Type":"ContainerStarted","Data":"fb4021bffabe881856ddef4066b589339185f407ed9fb12652b83b4cbb0717c3"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.358353 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh" event={"ID":"c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9","Type":"ContainerStarted","Data":"a635c30022ca6caf2a5d01b08a596578bdf75cb47227d573d16382bafbd63d45"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.370363 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" event={"ID":"26d1a4fa-1469-4128-bd56-c9a122b28068","Type":"ContainerStarted","Data":"1e6801320874cfa8e7840e091821e30de2bb270acdcb6fbda9f5cea2ffc991e6"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.370597 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.391741 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:18 crc kubenswrapper[5118]: E0121 00:11:18.392471 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:18.892442173 +0000 UTC m=+134.216689191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.453028 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56934: no serving certificate available for the kubelet" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.492941 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:18 crc kubenswrapper[5118]: E0121 00:11:18.493448 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 00:11:18.993430635 +0000 UTC m=+134.317677653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.507062 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.525679 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c" event={"ID":"940d93fe-9ecf-4274-9caf-6123a0ce203c","Type":"ContainerStarted","Data":"74b2da73798ad0651482426ec966989df79196846d14edff939caed3dd84fd8c"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.525713 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c" event={"ID":"940d93fe-9ecf-4274-9caf-6123a0ce203c","Type":"ContainerStarted","Data":"9a8d768c9d40b8e5f35f1c036b63f3b797f9bcc1fa7b794e96779d4d100f39c8"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.566718 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-xbtg4" event={"ID":"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a","Type":"ContainerStarted","Data":"99e90130602012a5e64d4397c5ca9c93aeabe91f4de68f8d9fb4faebb6822c5a"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.566780 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-xbtg4" 
event={"ID":"6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a","Type":"ContainerStarted","Data":"8480c12562884b8ffa10ab78f49de8d95ad633180c2156c0f81c07e9b1bbf025"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.584601 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrk8q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 00:11:18 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Jan 21 00:11:18 crc kubenswrapper[5118]: [+]process-running ok Jan 21 00:11:18 crc kubenswrapper[5118]: healthz check failed Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.584673 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrk8q" podUID="aa4fb550-3d2a-4a19-8b93-c5e54e9b897a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.596095 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.597590 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nkx6r" event={"ID":"271e0654-9d86-4ec1-8c25-d345a8a1eb0a","Type":"ContainerStarted","Data":"d301664a3de27335133a3221ea693caa7e6cd951d5c9ceff1ca7d46361ab8763"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.597635 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nkx6r" 
event={"ID":"271e0654-9d86-4ec1-8c25-d345a8a1eb0a","Type":"ContainerStarted","Data":"f247452d225efb37f7acb24f4f5745ad02a9b7770505f0ec134b9a5f4c8ec2c7"} Jan 21 00:11:18 crc kubenswrapper[5118]: E0121 00:11:18.597696 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:19.097665949 +0000 UTC m=+134.421912967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.615800 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56950: no serving certificate available for the kubelet" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.636203 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7" event={"ID":"d042745d-98a0-44c8-ac92-7704d8b43b84","Type":"ContainerStarted","Data":"f0418add80837db4cacf73394bfc6f1aca65f29470d04c10837955c6872121df"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.647357 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr" event={"ID":"a5be2081-6a80-4521-a1dd-2e332352f29c","Type":"ContainerStarted","Data":"62d2206f57fe49325fd1c84989c0f0d7ea91240535c5f815e502e8fe9e886fb1"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.647397 5118 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr" event={"ID":"a5be2081-6a80-4521-a1dd-2e332352f29c","Type":"ContainerStarted","Data":"7de562d55ed87f391140967210497948ee45aa70f52c101ffdb4178e61fa5bff"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.656491 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-xbtg4" podStartSLOduration=112.65644498 podStartE2EDuration="1m52.65644498s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:18.653810633 +0000 UTC m=+133.978057671" watchObservedRunningTime="2026-01-21 00:11:18.65644498 +0000 UTC m=+133.980691998" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.684856 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-w2r2c" podStartSLOduration=112.68484128 podStartE2EDuration="1m52.68484128s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:18.681217028 +0000 UTC m=+134.005464056" watchObservedRunningTime="2026-01-21 00:11:18.68484128 +0000 UTC m=+134.009088298" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.697934 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:18 crc kubenswrapper[5118]: E0121 00:11:18.698288 5118 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:19.198273881 +0000 UTC m=+134.522520899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.708945 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" event={"ID":"609947aa-6e7c-439b-a7a7-7d06f8ab4f1c","Type":"ContainerStarted","Data":"ccf06415258a021c4b8898a61f865766e89e20031c83692e0f84c75d2b878112"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.727651 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" podStartSLOduration=7.727634356 podStartE2EDuration="7.727634356s" podCreationTimestamp="2026-01-21 00:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:18.70296864 +0000 UTC m=+134.027215678" watchObservedRunningTime="2026-01-21 00:11:18.727634356 +0000 UTC m=+134.051881374" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.728276 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56960: no serving certificate available for the kubelet" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.729640 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nkx6r" podStartSLOduration=112.729626466 podStartE2EDuration="1m52.729626466s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:18.726553218 +0000 UTC m=+134.050800236" watchObservedRunningTime="2026-01-21 00:11:18.729626466 +0000 UTC m=+134.053873484" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.750960 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-cnnpj" podStartSLOduration=112.750925977 podStartE2EDuration="1m52.750925977s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:18.749062359 +0000 UTC m=+134.073309377" watchObservedRunningTime="2026-01-21 00:11:18.750925977 +0000 UTC m=+134.075172995" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.771927 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bg6nr" podStartSLOduration=112.771909049 podStartE2EDuration="1m52.771909049s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:18.770793491 +0000 UTC m=+134.095040509" watchObservedRunningTime="2026-01-21 00:11:18.771909049 +0000 UTC m=+134.096156067" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.802066 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:18 crc kubenswrapper[5118]: E0121 00:11:18.803050 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:19.303034268 +0000 UTC m=+134.627281286 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.827967 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-5ds28" event={"ID":"ba04635f-4c5f-4669-af58-97627beae1b2","Type":"ContainerStarted","Data":"b08a8c1e6604ad4a833fd8721354dc286b67aff631210632b5ccc65cb6e89eda"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.830624 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-5ds28" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.847520 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.847556 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:18 crc 
kubenswrapper[5118]: I0121 00:11:18.849321 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56970: no serving certificate available for the kubelet" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.855667 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-tgjx7" podStartSLOduration=112.855650473 podStartE2EDuration="1m52.855650473s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:18.827061768 +0000 UTC m=+134.151308786" watchObservedRunningTime="2026-01-21 00:11:18.855650473 +0000 UTC m=+134.179897491" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.875289 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s4czg" event={"ID":"f5911eba-6406-44b4-868f-a47787c95fdf","Type":"ContainerStarted","Data":"d13bb44c29a723b63936b7f1f5cd1fc94798016c0aa784a486431999bf91e68c"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.875339 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s4czg" event={"ID":"f5911eba-6406-44b4-868f-a47787c95fdf","Type":"ContainerStarted","Data":"7f8a4667128219f7ca81c58fb1a64578f02208b80f767620dd38de16e00904f4"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.891444 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.905453 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: 
\"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:18 crc kubenswrapper[5118]: E0121 00:11:18.907270 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:19.407253872 +0000 UTC m=+134.731500890 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.913500 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-f7rf5" event={"ID":"59298912-50d6-49ab-82d9-625a7df65661","Type":"ContainerStarted","Data":"1248ba95ef224b064afd890bbca420bab2bf9a9355c08a5e181f2622ffaf6ed8"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.931110 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-5ds28" podStartSLOduration=112.931089677 podStartE2EDuration="1m52.931089677s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:18.856282219 +0000 UTC m=+134.180529237" watchObservedRunningTime="2026-01-21 00:11:18.931089677 +0000 UTC m=+134.255336695" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.949108 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j" event={"ID":"65be3f94-f1d5-4ebb-933f-216e1650f309","Type":"ContainerStarted","Data":"74bcddfbb7face577e0cc3d1204959a9aa04a1809802d37cb0e35e750dab49d8"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.965250 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jcv4b" event={"ID":"cd261347-6a59-453c-836f-31c195e37417","Type":"ContainerStarted","Data":"0c30d5e73170b07e4f201f38d39bd16e358964968cf7e76006c8e0faf58ba8c0"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.967088 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8" event={"ID":"97853351-9834-428a-b4b9-399da76c66be","Type":"ContainerStarted","Data":"04f675cbf8823c0af2b56ae36fd738fd6f478437cf8254f9b07eeec90e0c875e"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.967111 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8" event={"ID":"97853351-9834-428a-b4b9-399da76c66be","Type":"ContainerStarted","Data":"e89b972653ed4162261b012ad69ef26867bc47441ca7de3c82904feb36b4fa76"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.967939 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.970587 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp" event={"ID":"dff88ce7-473b-4e36-ae15-98b61242704c","Type":"ContainerStarted","Data":"281ba315639551c868f8da1746c902423a090141c6f32926f58a97b2cb0b27b1"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.970611 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp" 
event={"ID":"dff88ce7-473b-4e36-ae15-98b61242704c","Type":"ContainerStarted","Data":"3bc28e1eaf308e9bcbc3940124aa373a14c45a0a230733a9fab3487eab10f215"} Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.971291 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.990668 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-c8s6j" podStartSLOduration=112.990653498 podStartE2EDuration="1m52.990653498s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:18.988569065 +0000 UTC m=+134.312816083" watchObservedRunningTime="2026-01-21 00:11:18.990653498 +0000 UTC m=+134.314900516" Jan 21 00:11:18 crc kubenswrapper[5118]: I0121 00:11:18.991367 5118 generic.go:358] "Generic (PLEG): container finished" podID="6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f" containerID="fe57342753f677ddc1e6375f9c5aaf2b6a0c39459e684f32846b6f09c5b3df2f" exitCode=0 Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.006618 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:19 crc kubenswrapper[5118]: E0121 00:11:19.007586 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-21 00:11:19.507570257 +0000 UTC m=+134.831817275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.024428 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz" event={"ID":"6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f","Type":"ContainerDied","Data":"fe57342753f677ddc1e6375f9c5aaf2b6a0c39459e684f32846b6f09c5b3df2f"} Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.024727 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.024862 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.024981 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-jnbtq" event={"ID":"39000d94-11b8-42ff-a127-2136d0f2cc0b","Type":"ContainerStarted","Data":"cce660578ea9cff3a4bf48b578a8eec834ece283b294662d1b365331a2de76b1"} Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.027235 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp" 
event={"ID":"a6c1fff3-93b2-4ea5-b74f-8f7715a04e5c","Type":"ContainerStarted","Data":"e0d93222a4e1db870cb35e9875b365b9bcc8b07fa7e2bbbb8cfcfde07cf1c9f6"} Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.028754 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-g77tt"] Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.029249 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dnqp8" podStartSLOduration=113.029233136 podStartE2EDuration="1m53.029233136s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:19.02502019 +0000 UTC m=+134.349267208" watchObservedRunningTime="2026-01-21 00:11:19.029233136 +0000 UTC m=+134.353480144" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.040997 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4vjlk" event={"ID":"946b14ca-c6a8-4020-bdb0-01d5ca69b536","Type":"ContainerStarted","Data":"f5223cb0eeaa5e231a86ce4bb2382e7e944664a4ef932a7a6e8b462ffd33fa74"} Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.071544 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56976: no serving certificate available for the kubelet" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.075764 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-p7jcp" podStartSLOduration=113.075743076 podStartE2EDuration="1m53.075743076s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:19.059671569 +0000 UTC m=+134.383918597" watchObservedRunningTime="2026-01-21 
00:11:19.075743076 +0000 UTC m=+134.399990094" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.103479 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" event={"ID":"9c52d3a9-6263-4edb-9071-1d5dc43c7197","Type":"ContainerStarted","Data":"d4cf678aca609401391f1bcfdf2f21e8076b196007df48bfe5d3e0e85716ad40"} Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.115439 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:19 crc kubenswrapper[5118]: E0121 00:11:19.115843 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:19.615831293 +0000 UTC m=+134.940078311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.116859 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4vjlk" podStartSLOduration=8.116842259 podStartE2EDuration="8.116842259s" podCreationTimestamp="2026-01-21 00:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:19.114657733 +0000 UTC m=+134.438904751" watchObservedRunningTime="2026-01-21 00:11:19.116842259 +0000 UTC m=+134.441089277" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.146640 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz" event={"ID":"11f14664-de3b-4cae-af94-5367cc3f2f4b","Type":"ContainerStarted","Data":"81f729e9a47e479b51c09788613dcef6f86d189bf82cccfa573e12a03fbfbd08"} Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.163371 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb" event={"ID":"7c693d48-122b-44a7-8257-f4f312e980aa","Type":"ContainerStarted","Data":"2baf03122af0e60a3556f963c6ab1e5d2f09f592ff116073005a79888cd27156"} Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.163593 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb" 
event={"ID":"7c693d48-122b-44a7-8257-f4f312e980aa","Type":"ContainerStarted","Data":"87661eab5887c8dca9b9510d1ccb9739c635e2b63f448e500244abe8879d4ed8"} Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.223695 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:19 crc kubenswrapper[5118]: E0121 00:11:19.252094 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:19.752069299 +0000 UTC m=+135.076316307 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.254316 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-p65gs" event={"ID":"8ec6d719-5d37-4de1-9afe-3e01bfe8d640","Type":"ContainerStarted","Data":"9f58caacb322abe418bd48e6c6cf6aec7297c3159b2a22d4687b693818d06f07"} Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.284706 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9zfdd" podStartSLOduration=113.284689467 
podStartE2EDuration="1m53.284689467s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:19.221534064 +0000 UTC m=+134.545781082" watchObservedRunningTime="2026-01-21 00:11:19.284689467 +0000 UTC m=+134.608936485" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.287430 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb" podStartSLOduration=113.287409636 podStartE2EDuration="1m53.287409636s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:19.26470126 +0000 UTC m=+134.588948278" watchObservedRunningTime="2026-01-21 00:11:19.287409636 +0000 UTC m=+134.611656674" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.300650 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6" event={"ID":"5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65","Type":"ContainerStarted","Data":"caab7d230667c66e4d05f1bedc628f637082b725aeb8f2224134c33566c18620"} Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.328489 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd" event={"ID":"40b0f866-cb01-4820-863e-91d46a2fdda1","Type":"ContainerStarted","Data":"eedd9474b3d120d7a2e2397f2bc1f1a4128312e1e2f67b465b180005928828ce"} Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.331241 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: 
\"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:19 crc kubenswrapper[5118]: E0121 00:11:19.331584 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:19.831572116 +0000 UTC m=+135.155819124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.335327 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-wkjhb" event={"ID":"485b5bf0-70af-4e4a-b766-d9e63a94395f","Type":"ContainerStarted","Data":"1e56d6d05d0417ba9d6a0762fa3b2d9d7cfa081b02caa56104710556d5e9eb13"} Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.343487 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-p65gs" podStartSLOduration=113.343471118 podStartE2EDuration="1m53.343471118s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:19.301747749 +0000 UTC m=+134.625994767" watchObservedRunningTime="2026-01-21 00:11:19.343471118 +0000 UTC m=+134.667718136" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.344721 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6" podStartSLOduration=113.344712039 podStartE2EDuration="1m53.344712039s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:19.342819081 +0000 UTC m=+134.667066099" watchObservedRunningTime="2026-01-21 00:11:19.344712039 +0000 UTC m=+134.668959077" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.358422 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm" event={"ID":"3ba5de99-4b50-4027-b9c6-f1fbb61a7146","Type":"ContainerStarted","Data":"214185244e45dd7366d20e681c599a2c30067ac526e96e70cc200b1940a6af7c"} Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.358457 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm" event={"ID":"3ba5de99-4b50-4027-b9c6-f1fbb61a7146","Type":"ContainerStarted","Data":"1f4718847a9d14a570314bc10aaba398f18be652039d2f34d0ae46bdc8b6aad4"} Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.358470 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.360421 5118 patch_prober.go:28] interesting pod/downloads-747b44746d-pdh68 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.360491 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-pdh68" podUID="f2431df6-6390-4fb8-b13e-56750ad2fed4" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.382856 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-7lpxz" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.384045 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd" podStartSLOduration=113.384026636 podStartE2EDuration="1m53.384026636s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:19.381384599 +0000 UTC m=+134.705631617" watchObservedRunningTime="2026-01-21 00:11:19.384026636 +0000 UTC m=+134.708273654" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.432039 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:19 crc kubenswrapper[5118]: E0121 00:11:19.434297 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:19.934275651 +0000 UTC m=+135.258522669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.446008 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm" podStartSLOduration=113.445993478 podStartE2EDuration="1m53.445993478s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:19.444658914 +0000 UTC m=+134.768905922" watchObservedRunningTime="2026-01-21 00:11:19.445993478 +0000 UTC m=+134.770240496" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.467518 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56992: no serving certificate available for the kubelet" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.539407 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:19 crc kubenswrapper[5118]: E0121 00:11:19.540026 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 00:11:20.040012013 +0000 UTC m=+135.364259031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.545464 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-5ds28" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.583346 5118 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-jrk8q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 00:11:19 crc kubenswrapper[5118]: [-]has-synced failed: reason withheld Jan 21 00:11:19 crc kubenswrapper[5118]: [+]process-running ok Jan 21 00:11:19 crc kubenswrapper[5118]: healthz check failed Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.583402 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-jrk8q" podUID="aa4fb550-3d2a-4a19-8b93-c5e54e9b897a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.642786 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " 
Jan 21 00:11:19 crc kubenswrapper[5118]: E0121 00:11:19.643120 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:20.143104978 +0000 UTC m=+135.467351996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.748547 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:19 crc kubenswrapper[5118]: E0121 00:11:19.748826 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:20.24881326 +0000 UTC m=+135.573060278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.850069 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:19 crc kubenswrapper[5118]: E0121 00:11:19.850270 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:20.350243603 +0000 UTC m=+135.674490611 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.850602 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:19 crc kubenswrapper[5118]: E0121 00:11:19.850914 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:20.35090325 +0000 UTC m=+135.675150268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:19 crc kubenswrapper[5118]: I0121 00:11:19.951437 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:19 crc kubenswrapper[5118]: E0121 00:11:19.951790 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:20.451762748 +0000 UTC m=+135.776009766 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.054576 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:20 crc kubenswrapper[5118]: E0121 00:11:20.054948 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:20.554933745 +0000 UTC m=+135.879180763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.155748 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:20 crc kubenswrapper[5118]: E0121 00:11:20.155858 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:20.655838415 +0000 UTC m=+135.980085443 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.156204 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:20 crc kubenswrapper[5118]: E0121 00:11:20.156489 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:20.656480151 +0000 UTC m=+135.980727169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.257801 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:20 crc kubenswrapper[5118]: E0121 00:11:20.257977 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:20.757950965 +0000 UTC m=+136.082197973 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.275645 5118 ???:1] "http: TLS handshake error from 192.168.126.11:56996: no serving certificate available for the kubelet" Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.358269 5118 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-bnqhm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.358334 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm" podUID="3ba5de99-4b50-4027-b9c6-f1fbb61a7146" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.359141 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:20 crc kubenswrapper[5118]: E0121 00:11:20.359434 
5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:20.859417319 +0000 UTC m=+136.183664397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.377689 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" event={"ID":"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00","Type":"ContainerStarted","Data":"3ff85f1d6300e9395787d48e93f1c0f2a1727898f093606856ee28c33b663611"} Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.378927 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.380299 5118 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-5r9pr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.380344 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" podUID="a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial 
tcp 10.217.0.26:8080: connect: connection refused" Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.393582 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh" event={"ID":"c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9","Type":"ContainerStarted","Data":"83a9b7407ce9b6ae4895c1572f762b40a1e9a4f4992a60945e2089ed850594a5"} Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.393630 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh" event={"ID":"c9bfbcd7-e344-4287-b4ac-aaf5f9a446e9","Type":"ContainerStarted","Data":"ce902a5a64a811c4737ce502bcf5b04f38af953781fc467efaf3d748bef401d4"} Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.404631 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" podStartSLOduration=114.404618676 podStartE2EDuration="1m54.404618676s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:20.40202284 +0000 UTC m=+135.726269868" watchObservedRunningTime="2026-01-21 00:11:20.404618676 +0000 UTC m=+135.728865694" Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.408632 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s4czg" event={"ID":"f5911eba-6406-44b4-868f-a47787c95fdf","Type":"ContainerStarted","Data":"eb17001570ddca04f334e5efdd555b3184c931c773d0e257f6691b78d958bb87"} Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.427797 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jcv4b" event={"ID":"cd261347-6a59-453c-836f-31c195e37417","Type":"ContainerStarted","Data":"b0ff25f1ef2aace14650428df8645d82db59e5b60889adca9699ac4670322eec"} 
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.428012 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jcv4b" event={"ID":"cd261347-6a59-453c-836f-31c195e37417","Type":"ContainerStarted","Data":"cf417a1fe41fc4532c445138734fdd84a8adf62469c80c4f58976b3b963a0310"}
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.428717 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-jcv4b"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.445207 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5wlqh" podStartSLOduration=114.445194115 podStartE2EDuration="1m54.445194115s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:20.442773454 +0000 UTC m=+135.767020472" watchObservedRunningTime="2026-01-21 00:11:20.445194115 +0000 UTC m=+135.769441133"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.445824 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz" event={"ID":"6c9c3fd8-7f40-4eb4-9a40-6e13fec6579f","Type":"ContainerStarted","Data":"bc9b29fbe60fbf53aa4680f2c2288244351d8f4b653d04a0be7d84b293cc239a"}
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.446039 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.451674 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-jnbtq" event={"ID":"39000d94-11b8-42ff-a127-2136d0f2cc0b","Type":"ContainerStarted","Data":"64bb09240b9ea47bea21a9b73a9954c3a02781be69db6abe5ec10d656313a3da"}
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.460556 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:20 crc kubenswrapper[5118]: E0121 00:11:20.460818 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:20.960790041 +0000 UTC m=+136.285037049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.461295 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:20 crc kubenswrapper[5118]: E0121 00:11:20.462622 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:20.962611707 +0000 UTC m=+136.286858725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.471935 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6c5wr"]
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.478006 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6c5wr"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.480743 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.489020 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s4czg" podStartSLOduration=114.489000826 podStartE2EDuration="1m54.489000826s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:20.488561615 +0000 UTC m=+135.812808633" watchObservedRunningTime="2026-01-21 00:11:20.489000826 +0000 UTC m=+135.813247844"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.501735 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp" event={"ID":"a6c1fff3-93b2-4ea5-b74f-8f7715a04e5c","Type":"ContainerStarted","Data":"183c5d3ad8c38b732bceb1d56d16f975984022176a7a1be5914ad202ff3e1934"}
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.501778 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp" event={"ID":"a6c1fff3-93b2-4ea5-b74f-8f7715a04e5c","Type":"ContainerStarted","Data":"30f883505223d60721a52470fcafbdb438aa2aba6e57dc2fe92a55cf2324c4c8"}
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.502075 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.514959 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4vjlk" event={"ID":"946b14ca-c6a8-4020-bdb0-01d5ca69b536","Type":"ContainerStarted","Data":"eba843e1971fb7d26b1a5a5f12b12a717c3fb6241eba6228fae7df29b4af9bf6"}
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.524135 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jcv4b" podStartSLOduration=9.524113677 podStartE2EDuration="9.524113677s" podCreationTimestamp="2026-01-21 00:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:20.522878096 +0000 UTC m=+135.847125114" watchObservedRunningTime="2026-01-21 00:11:20.524113677 +0000 UTC m=+135.848360695"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.524509 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6c5wr"]
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.551438 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" event={"ID":"4d8423b3-e68c-4083-859f-e89f705f28bd","Type":"ContainerStarted","Data":"9758b7415dcc5fc86fb6525250e569c9d71f064455a5fd649de2576b0c830e0f"}
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.556209 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz" podStartSLOduration=114.556190321 podStartE2EDuration="1m54.556190321s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:20.553772829 +0000 UTC m=+135.878019847" watchObservedRunningTime="2026-01-21 00:11:20.556190321 +0000 UTC m=+135.880437339"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.562271 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.562428 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28172373-ad9f-4755-a060-b467a2817a67-catalog-content\") pod \"certified-operators-6c5wr\" (UID: \"28172373-ad9f-4755-a060-b467a2817a67\") " pod="openshift-marketplace/certified-operators-6c5wr"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.562487 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28172373-ad9f-4755-a060-b467a2817a67-utilities\") pod \"certified-operators-6c5wr\" (UID: \"28172373-ad9f-4755-a060-b467a2817a67\") " pod="openshift-marketplace/certified-operators-6c5wr"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.562557 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnxbp\" (UniqueName: \"kubernetes.io/projected/28172373-ad9f-4755-a060-b467a2817a67-kube-api-access-hnxbp\") pod \"certified-operators-6c5wr\" (UID: \"28172373-ad9f-4755-a060-b467a2817a67\") " pod="openshift-marketplace/certified-operators-6c5wr"
Jan 21 00:11:20 crc kubenswrapper[5118]: E0121 00:11:20.563440 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:21.063424194 +0000 UTC m=+136.387671212 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.569365 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz" event={"ID":"11f14664-de3b-4cae-af94-5367cc3f2f4b","Type":"ContainerStarted","Data":"231a7c2985e08a0a5b8f28b77e3c8bb1ec1f7c52f589975dbdc118557703169b"}
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.569405 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz" event={"ID":"11f14664-de3b-4cae-af94-5367cc3f2f4b","Type":"ContainerStarted","Data":"20d9ac7b74f98426b25a86cc0ac69350e62b81f38ffb955f2c703d810a166ee0"}
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.584680 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.586197 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-jnbtq" podStartSLOduration=114.586188042 podStartE2EDuration="1m54.586188042s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:20.58457088 +0000 UTC m=+135.908817898" watchObservedRunningTime="2026-01-21 00:11:20.586188042 +0000 UTC m=+135.910435060"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.591330 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-p65gs" event={"ID":"8ec6d719-5d37-4de1-9afe-3e01bfe8d640","Type":"ContainerStarted","Data":"0630c6c22a2bfa0896dec066f09c3c4938299000ced02c7ecb3783699f910469"}
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.601545 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-qq2q6" event={"ID":"5fa95ab2-7bd1-49ba-bbe4-31209e7b7a65","Type":"ContainerStarted","Data":"ceca7d6eaac26568746388db47178ef764fcc54a9a95485da5c12a386e63d36d"}
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.649852 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp" podStartSLOduration=114.649839146 podStartE2EDuration="1m54.649839146s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:20.61490734 +0000 UTC m=+135.939154358" watchObservedRunningTime="2026-01-21 00:11:20.649839146 +0000 UTC m=+135.974086164"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.650475 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-cfnkd" event={"ID":"40b0f866-cb01-4820-863e-91d46a2fdda1","Type":"ContainerStarted","Data":"acd8920bbca23112fd125839b49624a227f06d1e12d3dea5de44a6991d5cb71f"}
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.654837 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s5pql"]
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.663394 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28172373-ad9f-4755-a060-b467a2817a67-catalog-content\") pod \"certified-operators-6c5wr\" (UID: \"28172373-ad9f-4755-a060-b467a2817a67\") " pod="openshift-marketplace/certified-operators-6c5wr"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.663495 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28172373-ad9f-4755-a060-b467a2817a67-utilities\") pod \"certified-operators-6c5wr\" (UID: \"28172373-ad9f-4755-a060-b467a2817a67\") " pod="openshift-marketplace/certified-operators-6c5wr"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.663626 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hnxbp\" (UniqueName: \"kubernetes.io/projected/28172373-ad9f-4755-a060-b467a2817a67-kube-api-access-hnxbp\") pod \"certified-operators-6c5wr\" (UID: \"28172373-ad9f-4755-a060-b467a2817a67\") " pod="openshift-marketplace/certified-operators-6c5wr"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.663784 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.664817 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28172373-ad9f-4755-a060-b467a2817a67-catalog-content\") pod \"certified-operators-6c5wr\" (UID: \"28172373-ad9f-4755-a060-b467a2817a67\") " pod="openshift-marketplace/certified-operators-6c5wr"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.666824 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28172373-ad9f-4755-a060-b467a2817a67-utilities\") pod \"certified-operators-6c5wr\" (UID: \"28172373-ad9f-4755-a060-b467a2817a67\") " pod="openshift-marketplace/certified-operators-6c5wr"
Jan 21 00:11:20 crc kubenswrapper[5118]: E0121 00:11:20.667618 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:21.167603417 +0000 UTC m=+136.491850435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.672075 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-wkjhb" event={"ID":"485b5bf0-70af-4e4a-b766-d9e63a94395f","Type":"ContainerStarted","Data":"7bea935e0c05945ed46a084f30dede0fa3314143504330229eea479d64db6097"}
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.672259 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-wkjhb" event={"ID":"485b5bf0-70af-4e4a-b766-d9e63a94395f","Type":"ContainerStarted","Data":"10a5ab527257213fdba63736fbfc3a2b18ce252878e7a62dea7044259f5b281a"}
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.672335 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5pql"]
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.672476 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s5pql"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.675978 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.681688 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.683711 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc" podStartSLOduration=114.683696135 podStartE2EDuration="1m54.683696135s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:20.682071664 +0000 UTC m=+136.006318692" watchObservedRunningTime="2026-01-21 00:11:20.683696135 +0000 UTC m=+136.007943153"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.684009 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-jrk8q"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.687440 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bnqhm"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.697606 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnxbp\" (UniqueName: \"kubernetes.io/projected/28172373-ad9f-4755-a060-b467a2817a67-kube-api-access-hnxbp\") pod \"certified-operators-6c5wr\" (UID: \"28172373-ad9f-4755-a060-b467a2817a67\") " pod="openshift-marketplace/certified-operators-6c5wr"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.743483 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2rnqz" podStartSLOduration=114.743465861 podStartE2EDuration="1m54.743465861s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:20.741691436 +0000 UTC m=+136.065938454" watchObservedRunningTime="2026-01-21 00:11:20.743465861 +0000 UTC m=+136.067712879"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.767610 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.768342 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f8120d-af6d-4e77-9772-0fc55dfad0bf-utilities\") pod \"community-operators-s5pql\" (UID: \"67f8120d-af6d-4e77-9772-0fc55dfad0bf\") " pod="openshift-marketplace/community-operators-s5pql"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.768424 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v879m\" (UniqueName: \"kubernetes.io/projected/67f8120d-af6d-4e77-9772-0fc55dfad0bf-kube-api-access-v879m\") pod \"community-operators-s5pql\" (UID: \"67f8120d-af6d-4e77-9772-0fc55dfad0bf\") " pod="openshift-marketplace/community-operators-s5pql"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.768674 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f8120d-af6d-4e77-9772-0fc55dfad0bf-catalog-content\") pod \"community-operators-s5pql\" (UID: \"67f8120d-af6d-4e77-9772-0fc55dfad0bf\") " pod="openshift-marketplace/community-operators-s5pql"
Jan 21 00:11:20 crc kubenswrapper[5118]: E0121 00:11:20.769071 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:21.26905526 +0000 UTC m=+136.593302278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.835911 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-wkjhb" podStartSLOduration=114.835898676 podStartE2EDuration="1m54.835898676s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:20.833461134 +0000 UTC m=+136.157708152" watchObservedRunningTime="2026-01-21 00:11:20.835898676 +0000 UTC m=+136.160145694"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.836679 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6c5wr"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.878879 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.878920 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f8120d-af6d-4e77-9772-0fc55dfad0bf-catalog-content\") pod \"community-operators-s5pql\" (UID: \"67f8120d-af6d-4e77-9772-0fc55dfad0bf\") " pod="openshift-marketplace/community-operators-s5pql"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.879029 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f8120d-af6d-4e77-9772-0fc55dfad0bf-utilities\") pod \"community-operators-s5pql\" (UID: \"67f8120d-af6d-4e77-9772-0fc55dfad0bf\") " pod="openshift-marketplace/community-operators-s5pql"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.879050 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v879m\" (UniqueName: \"kubernetes.io/projected/67f8120d-af6d-4e77-9772-0fc55dfad0bf-kube-api-access-v879m\") pod \"community-operators-s5pql\" (UID: \"67f8120d-af6d-4e77-9772-0fc55dfad0bf\") " pod="openshift-marketplace/community-operators-s5pql"
Jan 21 00:11:20 crc kubenswrapper[5118]: E0121 00:11:20.879571 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:21.379557833 +0000 UTC m=+136.703804851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.879802 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f8120d-af6d-4e77-9772-0fc55dfad0bf-catalog-content\") pod \"community-operators-s5pql\" (UID: \"67f8120d-af6d-4e77-9772-0fc55dfad0bf\") " pod="openshift-marketplace/community-operators-s5pql"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.879874 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f8120d-af6d-4e77-9772-0fc55dfad0bf-utilities\") pod \"community-operators-s5pql\" (UID: \"67f8120d-af6d-4e77-9772-0fc55dfad0bf\") " pod="openshift-marketplace/community-operators-s5pql"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.905073 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vp4hf"]
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.916561 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vp4hf"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.928347 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v879m\" (UniqueName: \"kubernetes.io/projected/67f8120d-af6d-4e77-9772-0fc55dfad0bf-kube-api-access-v879m\") pod \"community-operators-s5pql\" (UID: \"67f8120d-af6d-4e77-9772-0fc55dfad0bf\") " pod="openshift-marketplace/community-operators-s5pql"
Jan 21 00:11:20 crc kubenswrapper[5118]: I0121 00:11:20.930347 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vp4hf"]
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:20.999978 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.000335 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee59881-8e70-4769-b92d-5df34a2b9130-catalog-content\") pod \"certified-operators-vp4hf\" (UID: \"3ee59881-8e70-4769-b92d-5df34a2b9130\") " pod="openshift-marketplace/certified-operators-vp4hf"
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.000462 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee59881-8e70-4769-b92d-5df34a2b9130-utilities\") pod \"certified-operators-vp4hf\" (UID: \"3ee59881-8e70-4769-b92d-5df34a2b9130\") " pod="openshift-marketplace/certified-operators-vp4hf"
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.000580 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22lph\" (UniqueName: \"kubernetes.io/projected/3ee59881-8e70-4769-b92d-5df34a2b9130-kube-api-access-22lph\") pod \"certified-operators-vp4hf\" (UID: \"3ee59881-8e70-4769-b92d-5df34a2b9130\") " pod="openshift-marketplace/certified-operators-vp4hf"
Jan 21 00:11:21 crc kubenswrapper[5118]: E0121 00:11:21.000726 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:21.500705057 +0000 UTC m=+136.824952095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.007029 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s5pql"
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.049935 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s6c4w"]
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.103948 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-22lph\" (UniqueName: \"kubernetes.io/projected/3ee59881-8e70-4769-b92d-5df34a2b9130-kube-api-access-22lph\") pod \"certified-operators-vp4hf\" (UID: \"3ee59881-8e70-4769-b92d-5df34a2b9130\") " pod="openshift-marketplace/certified-operators-vp4hf"
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.104002 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee59881-8e70-4769-b92d-5df34a2b9130-catalog-content\") pod \"certified-operators-vp4hf\" (UID: \"3ee59881-8e70-4769-b92d-5df34a2b9130\") " pod="openshift-marketplace/certified-operators-vp4hf"
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.104029 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.104083 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee59881-8e70-4769-b92d-5df34a2b9130-utilities\") pod \"certified-operators-vp4hf\" (UID: \"3ee59881-8e70-4769-b92d-5df34a2b9130\") " pod="openshift-marketplace/certified-operators-vp4hf"
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.104517 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee59881-8e70-4769-b92d-5df34a2b9130-utilities\") pod \"certified-operators-vp4hf\" (UID: \"3ee59881-8e70-4769-b92d-5df34a2b9130\") " pod="openshift-marketplace/certified-operators-vp4hf"
Jan 21 00:11:21 crc kubenswrapper[5118]: E0121 00:11:21.104581 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:21.604564791 +0000 UTC m=+136.928811809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.104658 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee59881-8e70-4769-b92d-5df34a2b9130-catalog-content\") pod \"certified-operators-vp4hf\" (UID: \"3ee59881-8e70-4769-b92d-5df34a2b9130\") " pod="openshift-marketplace/certified-operators-vp4hf"
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.142944 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-22lph\" (UniqueName: \"kubernetes.io/projected/3ee59881-8e70-4769-b92d-5df34a2b9130-kube-api-access-22lph\") pod \"certified-operators-vp4hf\" (UID: \"3ee59881-8e70-4769-b92d-5df34a2b9130\") " pod="openshift-marketplace/certified-operators-vp4hf"
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.205814 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:21 crc kubenswrapper[5118]: E0121 00:11:21.206299 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:21.70627925 +0000 UTC m=+137.030526268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.257635 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vp4hf"
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.280355 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6c4w"]
Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.280518 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s6c4w" Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.307593 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:21 crc kubenswrapper[5118]: E0121 00:11:21.307976 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:21.80795977 +0000 UTC m=+137.132206788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.356930 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6c5wr"] Jan 21 00:11:21 crc kubenswrapper[5118]: W0121 00:11:21.400924 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28172373_ad9f_4755_a060_b467a2817a67.slice/crio-d8ab9e219a4a19d0af33e463e2dc0564e17bfaa3e59c850571275175d4b42aa9 WatchSource:0}: Error finding container d8ab9e219a4a19d0af33e463e2dc0564e17bfaa3e59c850571275175d4b42aa9: Status 404 returned error can't find the container with id 
d8ab9e219a4a19d0af33e463e2dc0564e17bfaa3e59c850571275175d4b42aa9 Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.408462 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.408722 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f20612e2-22cd-486f-b881-af82d40bd144-catalog-content\") pod \"community-operators-s6c4w\" (UID: \"f20612e2-22cd-486f-b881-af82d40bd144\") " pod="openshift-marketplace/community-operators-s6c4w" Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.408826 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfsgd\" (UniqueName: \"kubernetes.io/projected/f20612e2-22cd-486f-b881-af82d40bd144-kube-api-access-kfsgd\") pod \"community-operators-s6c4w\" (UID: \"f20612e2-22cd-486f-b881-af82d40bd144\") " pod="openshift-marketplace/community-operators-s6c4w" Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.408876 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f20612e2-22cd-486f-b881-af82d40bd144-utilities\") pod \"community-operators-s6c4w\" (UID: \"f20612e2-22cd-486f-b881-af82d40bd144\") " pod="openshift-marketplace/community-operators-s6c4w" Jan 21 00:11:21 crc kubenswrapper[5118]: E0121 00:11:21.408983 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-21 00:11:21.908965032 +0000 UTC m=+137.233212050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.510657 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kfsgd\" (UniqueName: \"kubernetes.io/projected/f20612e2-22cd-486f-b881-af82d40bd144-kube-api-access-kfsgd\") pod \"community-operators-s6c4w\" (UID: \"f20612e2-22cd-486f-b881-af82d40bd144\") " pod="openshift-marketplace/community-operators-s6c4w" Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.511025 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f20612e2-22cd-486f-b881-af82d40bd144-utilities\") pod \"community-operators-s6c4w\" (UID: \"f20612e2-22cd-486f-b881-af82d40bd144\") " pod="openshift-marketplace/community-operators-s6c4w" Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.511065 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.511186 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f20612e2-22cd-486f-b881-af82d40bd144-catalog-content\") pod \"community-operators-s6c4w\" (UID: \"f20612e2-22cd-486f-b881-af82d40bd144\") " pod="openshift-marketplace/community-operators-s6c4w" Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.511691 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f20612e2-22cd-486f-b881-af82d40bd144-catalog-content\") pod \"community-operators-s6c4w\" (UID: \"f20612e2-22cd-486f-b881-af82d40bd144\") " pod="openshift-marketplace/community-operators-s6c4w" Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.512278 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f20612e2-22cd-486f-b881-af82d40bd144-utilities\") pod \"community-operators-s6c4w\" (UID: \"f20612e2-22cd-486f-b881-af82d40bd144\") " pod="openshift-marketplace/community-operators-s6c4w" Jan 21 00:11:21 crc kubenswrapper[5118]: E0121 00:11:21.512560 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:22.012543469 +0000 UTC m=+137.336790497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.549403 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfsgd\" (UniqueName: \"kubernetes.io/projected/f20612e2-22cd-486f-b881-af82d40bd144-kube-api-access-kfsgd\") pod \"community-operators-s6c4w\" (UID: \"f20612e2-22cd-486f-b881-af82d40bd144\") " pod="openshift-marketplace/community-operators-s6c4w" Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.611866 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:21 crc kubenswrapper[5118]: E0121 00:11:21.612270 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:22.112253669 +0000 UTC m=+137.436500687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.619594 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5pql"] Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.624419 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6c4w" Jan 21 00:11:21 crc kubenswrapper[5118]: W0121 00:11:21.663245 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67f8120d_af6d_4e77_9772_0fc55dfad0bf.slice/crio-8f1144cce7ab9669ef3ff6f172160d821f545245f19e091238291f067a90b7d1 WatchSource:0}: Error finding container 8f1144cce7ab9669ef3ff6f172160d821f545245f19e091238291f067a90b7d1: Status 404 returned error can't find the container with id 8f1144cce7ab9669ef3ff6f172160d821f545245f19e091238291f067a90b7d1 Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.668650 5118 ???:1] "http: TLS handshake error from 192.168.126.11:51784: no serving certificate available for the kubelet" Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.670517 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vp4hf"] Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.689094 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5pql" 
event={"ID":"67f8120d-af6d-4e77-9772-0fc55dfad0bf","Type":"ContainerStarted","Data":"8f1144cce7ab9669ef3ff6f172160d821f545245f19e091238291f067a90b7d1"} Jan 21 00:11:21 crc kubenswrapper[5118]: W0121 00:11:21.689324 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ee59881_8e70_4769_b92d_5df34a2b9130.slice/crio-ddd1c15ef5b2696ac24d79911a84ba947ac73b25484518561e2454b7a1f7e5fe WatchSource:0}: Error finding container ddd1c15ef5b2696ac24d79911a84ba947ac73b25484518561e2454b7a1f7e5fe: Status 404 returned error can't find the container with id ddd1c15ef5b2696ac24d79911a84ba947ac73b25484518561e2454b7a1f7e5fe Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.699139 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6c5wr" event={"ID":"28172373-ad9f-4755-a060-b467a2817a67","Type":"ContainerStarted","Data":"b9875a0b11f299a2af24e66dff921ffd960c91393c8aa5db6efb7824925fb409"} Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.699227 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6c5wr" event={"ID":"28172373-ad9f-4755-a060-b467a2817a67","Type":"ContainerStarted","Data":"d8ab9e219a4a19d0af33e463e2dc0564e17bfaa3e59c850571275175d4b42aa9"} Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.713909 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:21 crc kubenswrapper[5118]: E0121 00:11:21.714341 5118 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:22.214324328 +0000 UTC m=+137.538571346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.815621 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:21 crc kubenswrapper[5118]: E0121 00:11:21.816948 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:22.31692551 +0000 UTC m=+137.641172528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:21 crc kubenswrapper[5118]: I0121 00:11:21.917170 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:21 crc kubenswrapper[5118]: E0121 00:11:21.917592 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:22.417573734 +0000 UTC m=+137.741820822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.001324 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6c4w"] Jan 21 00:11:22 crc kubenswrapper[5118]: W0121 00:11:22.008880 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf20612e2_22cd_486f_b881_af82d40bd144.slice/crio-1fab0350ac91f797f39adcd128591ff7294013db463957042cfdac25c3886971 WatchSource:0}: Error finding container 1fab0350ac91f797f39adcd128591ff7294013db463957042cfdac25c3886971: Status 404 returned error can't find the container with id 1fab0350ac91f797f39adcd128591ff7294013db463957042cfdac25c3886971 Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.018201 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:22 crc kubenswrapper[5118]: E0121 00:11:22.018431 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:22.518400661 +0000 UTC m=+137.842647689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.018539 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:22 crc kubenswrapper[5118]: E0121 00:11:22.018824 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:22.518812272 +0000 UTC m=+137.843059290 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.119502 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:22 crc kubenswrapper[5118]: E0121 00:11:22.119660 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:22.619627099 +0000 UTC m=+137.943874127 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.120058 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:22 crc kubenswrapper[5118]: E0121 00:11:22.120398 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:22.620385808 +0000 UTC m=+137.944632826 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.167944 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" podUID="26d1a4fa-1469-4128-bd56-c9a122b28068" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://1e6801320874cfa8e7840e091821e30de2bb270acdcb6fbda9f5cea2ffc991e6" gracePeriod=30 Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.220755 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:22 crc kubenswrapper[5118]: E0121 00:11:22.220911 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:22.720885428 +0000 UTC m=+138.045132436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.222620 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:22 crc kubenswrapper[5118]: E0121 00:11:22.224122 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:22.724104079 +0000 UTC m=+138.048351097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.264226 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.323340 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:22 crc kubenswrapper[5118]: E0121 00:11:22.324913 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:22.824857065 +0000 UTC m=+138.149104103 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.422232 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.426192 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:22 crc kubenswrapper[5118]: E0121 00:11:22.426651 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:22.926630967 +0000 UTC m=+138.250878035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.527339 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:22 crc kubenswrapper[5118]: E0121 00:11:22.527563 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:23.027529396 +0000 UTC m=+138.351776504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:22 crc kubenswrapper[5118]: E0121 00:11:22.536506 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 00:11:23.036494344 +0000 UTC m=+138.360741362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.536473 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.566231 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-jdqmz" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.566268 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.566445 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.571805 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.572150 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.639444 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.639905 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6dca5dc9-b7f4-449c-ab11-734b2faf1de6-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"6dca5dc9-b7f4-449c-ab11-734b2faf1de6\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.639978 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6dca5dc9-b7f4-449c-ab11-734b2faf1de6-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"6dca5dc9-b7f4-449c-ab11-734b2faf1de6\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 00:11:22 crc kubenswrapper[5118]: E0121 00:11:22.640145 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-21 00:11:23.140129923 +0000 UTC m=+138.464376941 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.651383 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4frhj"] Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.655096 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.661436 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.723496 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4frhj"] Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.748488 5118 generic.go:358] "Generic (PLEG): container finished" podID="3ee59881-8e70-4769-b92d-5df34a2b9130" containerID="2e0699377325df9171380a1f21d033638c1575666d74a8e0db5997379f8ecd64" exitCode=0 Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.748564 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vp4hf" event={"ID":"3ee59881-8e70-4769-b92d-5df34a2b9130","Type":"ContainerDied","Data":"2e0699377325df9171380a1f21d033638c1575666d74a8e0db5997379f8ecd64"} Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.748605 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-vp4hf" event={"ID":"3ee59881-8e70-4769-b92d-5df34a2b9130","Type":"ContainerStarted","Data":"ddd1c15ef5b2696ac24d79911a84ba947ac73b25484518561e2454b7a1f7e5fe"} Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.749898 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-utilities\") pod \"redhat-marketplace-4frhj\" (UID: \"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741\") " pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.749988 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-877zw\" (UniqueName: \"kubernetes.io/projected/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-kube-api-access-877zw\") pod \"redhat-marketplace-4frhj\" (UID: \"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741\") " pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.750071 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6dca5dc9-b7f4-449c-ab11-734b2faf1de6-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"6dca5dc9-b7f4-449c-ab11-734b2faf1de6\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.750505 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-catalog-content\") pod \"redhat-marketplace-4frhj\" (UID: \"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741\") " pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.750632 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/6dca5dc9-b7f4-449c-ab11-734b2faf1de6-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"6dca5dc9-b7f4-449c-ab11-734b2faf1de6\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.750687 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6dca5dc9-b7f4-449c-ab11-734b2faf1de6-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"6dca5dc9-b7f4-449c-ab11-734b2faf1de6\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.750730 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:22 crc kubenswrapper[5118]: E0121 00:11:22.751019 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:23.251005155 +0000 UTC m=+138.575252173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.751427 5118 generic.go:358] "Generic (PLEG): container finished" podID="28172373-ad9f-4755-a060-b467a2817a67" containerID="b9875a0b11f299a2af24e66dff921ffd960c91393c8aa5db6efb7824925fb409" exitCode=0 Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.751508 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6c5wr" event={"ID":"28172373-ad9f-4755-a060-b467a2817a67","Type":"ContainerDied","Data":"b9875a0b11f299a2af24e66dff921ffd960c91393c8aa5db6efb7824925fb409"} Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.776621 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6c4w" event={"ID":"f20612e2-22cd-486f-b881-af82d40bd144","Type":"ContainerStarted","Data":"9e7b3bbb75db1e0c17cb689b1fca3ea06271f5e524a66b3c84d7b178658c950a"} Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.776677 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6c4w" event={"ID":"f20612e2-22cd-486f-b881-af82d40bd144","Type":"ContainerStarted","Data":"1fab0350ac91f797f39adcd128591ff7294013db463957042cfdac25c3886971"} Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.792421 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" 
event={"ID":"8823ee71-944b-492d-8676-09a4f6e0103f","Type":"ContainerStarted","Data":"e009be4c1801371bc92b07ecf216ab1287989a658e8abf368896332ce93a6225"} Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.820151 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6dca5dc9-b7f4-449c-ab11-734b2faf1de6-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"6dca5dc9-b7f4-449c-ab11-734b2faf1de6\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.857860 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.858093 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-utilities\") pod \"redhat-marketplace-4frhj\" (UID: \"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741\") " pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.858125 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-877zw\" (UniqueName: \"kubernetes.io/projected/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-kube-api-access-877zw\") pod \"redhat-marketplace-4frhj\" (UID: \"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741\") " pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.858239 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-catalog-content\") pod 
\"redhat-marketplace-4frhj\" (UID: \"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741\") " pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:11:22 crc kubenswrapper[5118]: E0121 00:11:22.858778 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:23.358759619 +0000 UTC m=+138.683006637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.859186 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-utilities\") pod \"redhat-marketplace-4frhj\" (UID: \"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741\") " pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.860014 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-catalog-content\") pod \"redhat-marketplace-4frhj\" (UID: \"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741\") " pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.874452 5118 generic.go:358] "Generic (PLEG): container finished" podID="67f8120d-af6d-4e77-9772-0fc55dfad0bf" containerID="59f7aca7e9712f680c32c494ec92f6b88f1d57b3452239bdd98b0f81ea100e16" exitCode=0 Jan 
21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.876083 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5pql" event={"ID":"67f8120d-af6d-4e77-9772-0fc55dfad0bf","Type":"ContainerDied","Data":"59f7aca7e9712f680c32c494ec92f6b88f1d57b3452239bdd98b0f81ea100e16"} Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.890810 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-877zw\" (UniqueName: \"kubernetes.io/projected/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-kube-api-access-877zw\") pod \"redhat-marketplace-4frhj\" (UID: \"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741\") " pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.960944 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.961359 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 00:11:22 crc kubenswrapper[5118]: E0121 00:11:22.963223 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:23.463204878 +0000 UTC m=+138.787451966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:22 crc kubenswrapper[5118]: I0121 00:11:22.987359 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.063477 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:23 crc kubenswrapper[5118]: E0121 00:11:23.063857 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:23.563824771 +0000 UTC m=+138.888071789 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.064383 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mlmpf"] Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.080005 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mlmpf" Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.085954 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mlmpf"] Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.165252 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mq4d\" (UniqueName: \"kubernetes.io/projected/5346f11a-11bb-4650-8de5-7988e8cb2bba-kube-api-access-2mq4d\") pod \"redhat-marketplace-mlmpf\" (UID: \"5346f11a-11bb-4650-8de5-7988e8cb2bba\") " pod="openshift-marketplace/redhat-marketplace-mlmpf" Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.165302 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5346f11a-11bb-4650-8de5-7988e8cb2bba-utilities\") pod \"redhat-marketplace-mlmpf\" (UID: \"5346f11a-11bb-4650-8de5-7988e8cb2bba\") " pod="openshift-marketplace/redhat-marketplace-mlmpf" Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.165425 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/5346f11a-11bb-4650-8de5-7988e8cb2bba-catalog-content\") pod \"redhat-marketplace-mlmpf\" (UID: \"5346f11a-11bb-4650-8de5-7988e8cb2bba\") " pod="openshift-marketplace/redhat-marketplace-mlmpf" Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.165471 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:23 crc kubenswrapper[5118]: E0121 00:11:23.165811 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:23.665797267 +0000 UTC m=+138.990044285 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.266303 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.266540 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5346f11a-11bb-4650-8de5-7988e8cb2bba-catalog-content\") pod \"redhat-marketplace-mlmpf\" (UID: \"5346f11a-11bb-4650-8de5-7988e8cb2bba\") " pod="openshift-marketplace/redhat-marketplace-mlmpf" Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.266660 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2mq4d\" (UniqueName: \"kubernetes.io/projected/5346f11a-11bb-4650-8de5-7988e8cb2bba-kube-api-access-2mq4d\") pod \"redhat-marketplace-mlmpf\" (UID: \"5346f11a-11bb-4650-8de5-7988e8cb2bba\") " pod="openshift-marketplace/redhat-marketplace-mlmpf" Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.266687 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5346f11a-11bb-4650-8de5-7988e8cb2bba-utilities\") pod \"redhat-marketplace-mlmpf\" (UID: \"5346f11a-11bb-4650-8de5-7988e8cb2bba\") " 
pod="openshift-marketplace/redhat-marketplace-mlmpf" Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.267301 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5346f11a-11bb-4650-8de5-7988e8cb2bba-utilities\") pod \"redhat-marketplace-mlmpf\" (UID: \"5346f11a-11bb-4650-8de5-7988e8cb2bba\") " pod="openshift-marketplace/redhat-marketplace-mlmpf" Jan 21 00:11:23 crc kubenswrapper[5118]: E0121 00:11:23.267401 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:23.767381784 +0000 UTC m=+139.091628802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.267677 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5346f11a-11bb-4650-8de5-7988e8cb2bba-catalog-content\") pod \"redhat-marketplace-mlmpf\" (UID: \"5346f11a-11bb-4650-8de5-7988e8cb2bba\") " pod="openshift-marketplace/redhat-marketplace-mlmpf" Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.296673 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mq4d\" (UniqueName: \"kubernetes.io/projected/5346f11a-11bb-4650-8de5-7988e8cb2bba-kube-api-access-2mq4d\") pod \"redhat-marketplace-mlmpf\" (UID: 
\"5346f11a-11bb-4650-8de5-7988e8cb2bba\") " pod="openshift-marketplace/redhat-marketplace-mlmpf" Jan 21 00:11:23 crc kubenswrapper[5118]: E0121 00:11:23.369716 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:23.86970012 +0000 UTC m=+139.193947148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.369903 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.416098 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mlmpf" Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.469660 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.471698 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:23 crc kubenswrapper[5118]: E0121 00:11:23.472222 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:23.97220117 +0000 UTC m=+139.296448188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:23 crc kubenswrapper[5118]: W0121 00:11:23.480923 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6dca5dc9_b7f4_449c_ab11_734b2faf1de6.slice/crio-3649f7cca0d41352c9a1badf9ae8c9c46528b8a1ad355be5e1b994ca7c2f313b WatchSource:0}: Error finding container 3649f7cca0d41352c9a1badf9ae8c9c46528b8a1ad355be5e1b994ca7c2f313b: Status 404 returned error can't find the container with id 3649f7cca0d41352c9a1badf9ae8c9c46528b8a1ad355be5e1b994ca7c2f313b Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.517339 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4frhj"] Jan 21 00:11:23 crc kubenswrapper[5118]: W0121 00:11:23.538333 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd18361c6_e5b6_44f0_b6d4_4dae1ff8c741.slice/crio-4bda1af60e5cd2d9bf46add4c880292139ed48e42f293396b9906a2983c2f8f7 WatchSource:0}: Error finding container 4bda1af60e5cd2d9bf46add4c880292139ed48e42f293396b9906a2983c2f8f7: Status 404 returned error can't find the container with id 4bda1af60e5cd2d9bf46add4c880292139ed48e42f293396b9906a2983c2f8f7 Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.573946 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:23 crc kubenswrapper[5118]: E0121 00:11:23.574348 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:24.07433234 +0000 UTC m=+139.398579358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.666067 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-94zfv"]
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.681051 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94zfv"
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.681744 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:23 crc kubenswrapper[5118]: E0121 00:11:23.682820 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:24.182793962 +0000 UTC m=+139.507040980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.684130 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.693572 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94zfv"]
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.784259 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl6xq\" (UniqueName: \"kubernetes.io/projected/8d65d512-7c64-462f-b40d-ad0252a88233-kube-api-access-bl6xq\") pod \"redhat-operators-94zfv\" (UID: \"8d65d512-7c64-462f-b40d-ad0252a88233\") " pod="openshift-marketplace/redhat-operators-94zfv"
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.784348 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.784384 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d65d512-7c64-462f-b40d-ad0252a88233-utilities\") pod \"redhat-operators-94zfv\" (UID: \"8d65d512-7c64-462f-b40d-ad0252a88233\") " pod="openshift-marketplace/redhat-operators-94zfv"
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.784419 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d65d512-7c64-462f-b40d-ad0252a88233-catalog-content\") pod \"redhat-operators-94zfv\" (UID: \"8d65d512-7c64-462f-b40d-ad0252a88233\") " pod="openshift-marketplace/redhat-operators-94zfv"
Jan 21 00:11:23 crc kubenswrapper[5118]: E0121 00:11:23.784755 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:24.284741918 +0000 UTC m=+139.608988936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.802432 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mlmpf"]
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.890057 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.890290 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bl6xq\" (UniqueName: \"kubernetes.io/projected/8d65d512-7c64-462f-b40d-ad0252a88233-kube-api-access-bl6xq\") pod \"redhat-operators-94zfv\" (UID: \"8d65d512-7c64-462f-b40d-ad0252a88233\") " pod="openshift-marketplace/redhat-operators-94zfv"
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.890369 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d65d512-7c64-462f-b40d-ad0252a88233-utilities\") pod \"redhat-operators-94zfv\" (UID: \"8d65d512-7c64-462f-b40d-ad0252a88233\") " pod="openshift-marketplace/redhat-operators-94zfv"
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.890404 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d65d512-7c64-462f-b40d-ad0252a88233-catalog-content\") pod \"redhat-operators-94zfv\" (UID: \"8d65d512-7c64-462f-b40d-ad0252a88233\") " pod="openshift-marketplace/redhat-operators-94zfv"
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.890813 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d65d512-7c64-462f-b40d-ad0252a88233-catalog-content\") pod \"redhat-operators-94zfv\" (UID: \"8d65d512-7c64-462f-b40d-ad0252a88233\") " pod="openshift-marketplace/redhat-operators-94zfv"
Jan 21 00:11:23 crc kubenswrapper[5118]: E0121 00:11:23.890950 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:24.390929682 +0000 UTC m=+139.715176700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.891093 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d65d512-7c64-462f-b40d-ad0252a88233-utilities\") pod \"redhat-operators-94zfv\" (UID: \"8d65d512-7c64-462f-b40d-ad0252a88233\") " pod="openshift-marketplace/redhat-operators-94zfv"
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.907997 5118 generic.go:358] "Generic (PLEG): container finished" podID="d18361c6-e5b6-44f0-b6d4-4dae1ff8c741" containerID="cebfc3ba7fc479b564338f122863b8ca7c1a43361cff35d80d5fb74f334357fb" exitCode=0
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.908090 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4frhj" event={"ID":"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741","Type":"ContainerDied","Data":"cebfc3ba7fc479b564338f122863b8ca7c1a43361cff35d80d5fb74f334357fb"}
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.908134 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4frhj" event={"ID":"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741","Type":"ContainerStarted","Data":"4bda1af60e5cd2d9bf46add4c880292139ed48e42f293396b9906a2983c2f8f7"}
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.926700 5118 generic.go:358] "Generic (PLEG): container finished" podID="f20612e2-22cd-486f-b881-af82d40bd144" containerID="9e7b3bbb75db1e0c17cb689b1fca3ea06271f5e524a66b3c84d7b178658c950a" exitCode=0
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.926790 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6c4w" event={"ID":"f20612e2-22cd-486f-b881-af82d40bd144","Type":"ContainerDied","Data":"9e7b3bbb75db1e0c17cb689b1fca3ea06271f5e524a66b3c84d7b178658c950a"}
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.930367 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl6xq\" (UniqueName: \"kubernetes.io/projected/8d65d512-7c64-462f-b40d-ad0252a88233-kube-api-access-bl6xq\") pod \"redhat-operators-94zfv\" (UID: \"8d65d512-7c64-462f-b40d-ad0252a88233\") " pod="openshift-marketplace/redhat-operators-94zfv"
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.940415 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlmpf" event={"ID":"5346f11a-11bb-4650-8de5-7988e8cb2bba","Type":"ContainerStarted","Data":"173e3d498ece2647f4611dbd89757ab9daacd8d1b93a02af79d93bf72e675a9a"}
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.952582 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"6dca5dc9-b7f4-449c-ab11-734b2faf1de6","Type":"ContainerStarted","Data":"3649f7cca0d41352c9a1badf9ae8c9c46528b8a1ad355be5e1b994ca7c2f313b"}
Jan 21 00:11:23 crc kubenswrapper[5118]: I0121 00:11:23.991906 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:23 crc kubenswrapper[5118]: E0121 00:11:23.992417 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:24.492404126 +0000 UTC m=+139.816651144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.050577 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94zfv"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.059622 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jssxn"]
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.068984 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jssxn"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.078249 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jssxn"]
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.102934 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:24 crc kubenswrapper[5118]: E0121 00:11:24.103459 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:24.603426162 +0000 UTC m=+139.927673180 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.205302 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssr2x\" (UniqueName: \"kubernetes.io/projected/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-kube-api-access-ssr2x\") pod \"redhat-operators-jssxn\" (UID: \"a4b72480-d05d-4b1a-9b30-0d3e80ea6249\") " pod="openshift-marketplace/redhat-operators-jssxn"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.205408 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.205457 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-catalog-content\") pod \"redhat-operators-jssxn\" (UID: \"a4b72480-d05d-4b1a-9b30-0d3e80ea6249\") " pod="openshift-marketplace/redhat-operators-jssxn"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.205520 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-utilities\") pod \"redhat-operators-jssxn\" (UID: \"a4b72480-d05d-4b1a-9b30-0d3e80ea6249\") " pod="openshift-marketplace/redhat-operators-jssxn"
Jan 21 00:11:24 crc kubenswrapper[5118]: E0121 00:11:24.205906 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:24.705890551 +0000 UTC m=+140.030137669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.277510 5118 ???:1] "http: TLS handshake error from 192.168.126.11:51792: no serving certificate available for the kubelet"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.307874 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:24 crc kubenswrapper[5118]: E0121 00:11:24.308016 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:24.807996841 +0000 UTC m=+140.132243859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:24 crc kubenswrapper[5118]: E0121 00:11:24.308798 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:24.808786881 +0000 UTC m=+140.133033899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.310519 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.310617 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-catalog-content\") pod \"redhat-operators-jssxn\" (UID: \"a4b72480-d05d-4b1a-9b30-0d3e80ea6249\") " pod="openshift-marketplace/redhat-operators-jssxn"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.310708 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-utilities\") pod \"redhat-operators-jssxn\" (UID: \"a4b72480-d05d-4b1a-9b30-0d3e80ea6249\") " pod="openshift-marketplace/redhat-operators-jssxn"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.310936 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ssr2x\" (UniqueName: \"kubernetes.io/projected/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-kube-api-access-ssr2x\") pod \"redhat-operators-jssxn\" (UID: \"a4b72480-d05d-4b1a-9b30-0d3e80ea6249\") " pod="openshift-marketplace/redhat-operators-jssxn"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.312012 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-utilities\") pod \"redhat-operators-jssxn\" (UID: \"a4b72480-d05d-4b1a-9b30-0d3e80ea6249\") " pod="openshift-marketplace/redhat-operators-jssxn"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.314417 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-catalog-content\") pod \"redhat-operators-jssxn\" (UID: \"a4b72480-d05d-4b1a-9b30-0d3e80ea6249\") " pod="openshift-marketplace/redhat-operators-jssxn"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.367271 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssr2x\" (UniqueName: \"kubernetes.io/projected/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-kube-api-access-ssr2x\") pod \"redhat-operators-jssxn\" (UID: \"a4b72480-d05d-4b1a-9b30-0d3e80ea6249\") " pod="openshift-marketplace/redhat-operators-jssxn"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.397382 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jssxn"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.411473 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94zfv"]
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.412037 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:24 crc kubenswrapper[5118]: E0121 00:11:24.412249 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:24.912224445 +0000 UTC m=+140.236471463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.412427 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:24 crc kubenswrapper[5118]: E0121 00:11:24.412893 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:24.912876742 +0000 UTC m=+140.237123820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:24 crc kubenswrapper[5118]: W0121 00:11:24.434534 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d65d512_7c64_462f_b40d_ad0252a88233.slice/crio-1eda30b2e64eed866377c347865801f347beff47b57e88957bab7bcecde38551 WatchSource:0}: Error finding container 1eda30b2e64eed866377c347865801f347beff47b57e88957bab7bcecde38551: Status 404 returned error can't find the container with id 1eda30b2e64eed866377c347865801f347beff47b57e88957bab7bcecde38551
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.516706 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:24 crc kubenswrapper[5118]: E0121 00:11:24.517197 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:25.017174337 +0000 UTC m=+140.341421345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.619241 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:24 crc kubenswrapper[5118]: E0121 00:11:24.619623 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:25.119606626 +0000 UTC m=+140.443853644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.720982 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:24 crc kubenswrapper[5118]: E0121 00:11:24.721249 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:25.221200792 +0000 UTC m=+140.545447810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.721606 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:24 crc kubenswrapper[5118]: E0121 00:11:24.721991 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:25.221974622 +0000 UTC m=+140.546221640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.813089 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.813149 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.824289 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:24 crc kubenswrapper[5118]: E0121 00:11:24.825329 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:25.325261862 +0000 UTC m=+140.649508880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.825925 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.827288 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:24 crc kubenswrapper[5118]: E0121 00:11:24.827786 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:25.327772405 +0000 UTC m=+140.652019423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.878809 5118 patch_prober.go:28] interesting pod/downloads-747b44746d-pdh68 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body=
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.878874 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-pdh68" podUID="f2431df6-6390-4fb8-b13e-56750ad2fed4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused"
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.923800 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jssxn"]
Jan 21 00:11:24 crc kubenswrapper[5118]: I0121 00:11:24.928918 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:24 crc kubenswrapper[5118]: E0121 00:11:24.929734 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:25.429702341 +0000 UTC m=+140.753949379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.033346 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:25 crc kubenswrapper[5118]: E0121 00:11:25.033771 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:25.533755611 +0000 UTC m=+140.858002639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.054865 5118 generic.go:358] "Generic (PLEG): container finished" podID="5346f11a-11bb-4650-8de5-7988e8cb2bba" containerID="764bdc246fb04d2d4abaabb1979042f3858b5ae71628243a62d9f523ba022448" exitCode=0
Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.054943 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlmpf" event={"ID":"5346f11a-11bb-4650-8de5-7988e8cb2bba","Type":"ContainerDied","Data":"764bdc246fb04d2d4abaabb1979042f3858b5ae71628243a62d9f523ba022448"}
Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.060458 5118 generic.go:358] "Generic (PLEG): container finished" podID="6dca5dc9-b7f4-449c-ab11-734b2faf1de6" containerID="66de08152f0c4668d7a7af52365297d2df4feb9784ee8f2eadd00846c1a96d88" exitCode=0
Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.060535 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"6dca5dc9-b7f4-449c-ab11-734b2faf1de6","Type":"ContainerDied","Data":"66de08152f0c4668d7a7af52365297d2df4feb9784ee8f2eadd00846c1a96d88"}
Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.084030 5118 generic.go:358] "Generic (PLEG): container finished" podID="8d65d512-7c64-462f-b40d-ad0252a88233" containerID="ebcc1f80caeef4fa7c41b04e8b8c47f377a3c8d8d10e632e53e1df4b223a2a4f" exitCode=0
Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.084198 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94zfv" event={"ID":"8d65d512-7c64-462f-b40d-ad0252a88233","Type":"ContainerDied","Data":"ebcc1f80caeef4fa7c41b04e8b8c47f377a3c8d8d10e632e53e1df4b223a2a4f"}
Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.084227 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94zfv" event={"ID":"8d65d512-7c64-462f-b40d-ad0252a88233","Type":"ContainerStarted","Data":"1eda30b2e64eed866377c347865801f347beff47b57e88957bab7bcecde38551"}
Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.100852 5118 generic.go:358] "Generic (PLEG): container finished" podID="7c693d48-122b-44a7-8257-f4f312e980aa" containerID="2baf03122af0e60a3556f963c6ab1e5d2f09f592ff116073005a79888cd27156" exitCode=0
Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.101125 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb" event={"ID":"7c693d48-122b-44a7-8257-f4f312e980aa","Type":"ContainerDied","Data":"2baf03122af0e60a3556f963c6ab1e5d2f09f592ff116073005a79888cd27156"}
Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.111415 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8t6fc"
Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.134921 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 00:11:25 crc kubenswrapper[5118]: E0121 00:11:25.135417 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed.
No retries permitted until 2026-01-21 00:11:25.635381928 +0000 UTC m=+140.959628946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.236245 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:25 crc kubenswrapper[5118]: E0121 00:11:25.236562 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:25.736545845 +0000 UTC m=+141.060792863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.338471 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:25 crc kubenswrapper[5118]: E0121 00:11:25.339244 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:25.839222419 +0000 UTC m=+141.163469437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.407211 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-xbtg4" Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.407412 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-xbtg4" Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.409462 5118 patch_prober.go:28] interesting pod/console-64d44f6ddf-xbtg4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.409537 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-xbtg4" podUID="6f6f802d-add4-4e9d-bbbe-d3fd2eefd62a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.441014 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:25 crc 
kubenswrapper[5118]: E0121 00:11:25.441606 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:25.941585436 +0000 UTC m=+141.265832454 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.542759 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:25 crc kubenswrapper[5118]: E0121 00:11:25.542975 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:26.042934827 +0000 UTC m=+141.367181845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.543273 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:25 crc kubenswrapper[5118]: E0121 00:11:25.543613 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:26.043601464 +0000 UTC m=+141.367848472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.645511 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:25 crc kubenswrapper[5118]: E0121 00:11:25.646046 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:26.14596117 +0000 UTC m=+141.470208208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:25 crc kubenswrapper[5118]: E0121 00:11:25.747575 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 00:11:26.247558667 +0000 UTC m=+141.571805675 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.747220 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.848925 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:25 crc kubenswrapper[5118]: E0121 00:11:25.849094 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:26.349051232 +0000 UTC m=+141.673298250 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.849454 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:25 crc kubenswrapper[5118]: E0121 00:11:25.849886 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:26.349876323 +0000 UTC m=+141.674123341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.885376 5118 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 00:11:25 crc kubenswrapper[5118]: I0121 00:11:25.951016 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:25 crc kubenswrapper[5118]: E0121 00:11:25.951422 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 00:11:26.451403718 +0000 UTC m=+141.775650736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.054508 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:26 crc kubenswrapper[5118]: E0121 00:11:26.055855 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 00:11:26.555840428 +0000 UTC m=+141.880087446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tlb84" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.113274 5118 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T00:11:25.885416344Z","UUID":"95d74b19-b51a-405c-8e10-7bc3f4d90ccc","Handler":null,"Name":"","Endpoint":""} Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.116376 5118 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.116437 5118 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.118965 5118 generic.go:358] "Generic (PLEG): container finished" podID="a4b72480-d05d-4b1a-9b30-0d3e80ea6249" containerID="94fbe24572a04ebeedb2588530d72197a47094babf66f631703d42161954206b" exitCode=0 Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.119050 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jssxn" event={"ID":"a4b72480-d05d-4b1a-9b30-0d3e80ea6249","Type":"ContainerDied","Data":"94fbe24572a04ebeedb2588530d72197a47094babf66f631703d42161954206b"} Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.119074 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-jssxn" event={"ID":"a4b72480-d05d-4b1a-9b30-0d3e80ea6249","Type":"ContainerStarted","Data":"d179a10ab8dc512dbf748cb54fbac343e86ffa735addc35494f6528b1d83a95c"} Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.123984 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" event={"ID":"8823ee71-944b-492d-8676-09a4f6e0103f","Type":"ContainerStarted","Data":"3bfba2183ceb4d29459e27604a0c2f432dead0a5b39099234ae9ba2a2ae495e9"} Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.158646 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.163387 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.262763 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.266954 5118 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.266985 5118 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.290024 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tlb84\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.427322 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb" Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.432770 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.568402 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn28v\" (UniqueName: \"kubernetes.io/projected/7c693d48-122b-44a7-8257-f4f312e980aa-kube-api-access-hn28v\") pod \"7c693d48-122b-44a7-8257-f4f312e980aa\" (UID: \"7c693d48-122b-44a7-8257-f4f312e980aa\") " Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.568885 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c693d48-122b-44a7-8257-f4f312e980aa-secret-volume\") pod \"7c693d48-122b-44a7-8257-f4f312e980aa\" (UID: \"7c693d48-122b-44a7-8257-f4f312e980aa\") " Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.568909 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6dca5dc9-b7f4-449c-ab11-734b2faf1de6-kube-api-access\") pod \"6dca5dc9-b7f4-449c-ab11-734b2faf1de6\" (UID: \"6dca5dc9-b7f4-449c-ab11-734b2faf1de6\") " Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.568930 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6dca5dc9-b7f4-449c-ab11-734b2faf1de6-kubelet-dir\") pod \"6dca5dc9-b7f4-449c-ab11-734b2faf1de6\" (UID: \"6dca5dc9-b7f4-449c-ab11-734b2faf1de6\") " Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.568990 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c693d48-122b-44a7-8257-f4f312e980aa-config-volume\") pod 
\"7c693d48-122b-44a7-8257-f4f312e980aa\" (UID: \"7c693d48-122b-44a7-8257-f4f312e980aa\") " Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.569404 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dca5dc9-b7f4-449c-ab11-734b2faf1de6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6dca5dc9-b7f4-449c-ab11-734b2faf1de6" (UID: "6dca5dc9-b7f4-449c-ab11-734b2faf1de6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.569896 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c693d48-122b-44a7-8257-f4f312e980aa-config-volume" (OuterVolumeSpecName: "config-volume") pod "7c693d48-122b-44a7-8257-f4f312e980aa" (UID: "7c693d48-122b-44a7-8257-f4f312e980aa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.590323 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.607839 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c693d48-122b-44a7-8257-f4f312e980aa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7c693d48-122b-44a7-8257-f4f312e980aa" (UID: "7c693d48-122b-44a7-8257-f4f312e980aa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.608513 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dca5dc9-b7f4-449c-ab11-734b2faf1de6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6dca5dc9-b7f4-449c-ab11-734b2faf1de6" (UID: "6dca5dc9-b7f4-449c-ab11-734b2faf1de6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.609622 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c693d48-122b-44a7-8257-f4f312e980aa-kube-api-access-hn28v" (OuterVolumeSpecName: "kube-api-access-hn28v") pod "7c693d48-122b-44a7-8257-f4f312e980aa" (UID: "7c693d48-122b-44a7-8257-f4f312e980aa"). InnerVolumeSpecName "kube-api-access-hn28v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.702016 5118 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c693d48-122b-44a7-8257-f4f312e980aa-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.702043 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6dca5dc9-b7f4-449c-ab11-734b2faf1de6-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.702052 5118 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6dca5dc9-b7f4-449c-ab11-734b2faf1de6-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.702060 5118 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c693d48-122b-44a7-8257-f4f312e980aa-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.702068 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hn28v\" (UniqueName: \"kubernetes.io/projected/7c693d48-122b-44a7-8257-f4f312e980aa-kube-api-access-hn28v\") on node \"crc\" DevicePath \"\""
Jan 21 00:11:26 crc kubenswrapper[5118]: I0121 00:11:26.990688 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes"
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.130014 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tlb84"]
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.153756 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.153790 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"6dca5dc9-b7f4-449c-ab11-734b2faf1de6","Type":"ContainerDied","Data":"3649f7cca0d41352c9a1badf9ae8c9c46528b8a1ad355be5e1b994ca7c2f313b"}
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.154040 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3649f7cca0d41352c9a1badf9ae8c9c46528b8a1ad355be5e1b994ca7c2f313b"
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.167762 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" event={"ID":"8823ee71-944b-492d-8676-09a4f6e0103f","Type":"ContainerStarted","Data":"271157fb3a1a8314b96ab6d0d23a72354245c7efb3fb55833aa39c07eb4351c3"}
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.167795 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" event={"ID":"8823ee71-944b-492d-8676-09a4f6e0103f","Type":"ContainerStarted","Data":"5752727f59956fb34a80fca08cfed316a079074ba513179fe73373ea17d265af"}
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.178622 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb" event={"ID":"7c693d48-122b-44a7-8257-f4f312e980aa","Type":"ContainerDied","Data":"87661eab5887c8dca9b9510d1ccb9739c635e2b63f448e500244abe8879d4ed8"}
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.178665 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87661eab5887c8dca9b9510d1ccb9739c635e2b63f448e500244abe8879d4ed8"
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.178681 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb"
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.189096 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-t9lqw" podStartSLOduration=16.189071234 podStartE2EDuration="16.189071234s" podCreationTimestamp="2026-01-21 00:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:27.184021466 +0000 UTC m=+142.508268504" watchObservedRunningTime="2026-01-21 00:11:27.189071234 +0000 UTC m=+142.513318252"
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.944816 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.945837 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6dca5dc9-b7f4-449c-ab11-734b2faf1de6" containerName="pruner"
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.945852 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dca5dc9-b7f4-449c-ab11-734b2faf1de6" containerName="pruner"
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.945861 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c693d48-122b-44a7-8257-f4f312e980aa" containerName="collect-profiles"
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.945868 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c693d48-122b-44a7-8257-f4f312e980aa" containerName="collect-profiles"
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.945968 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="7c693d48-122b-44a7-8257-f4f312e980aa" containerName="collect-profiles"
Jan 21 00:11:27 crc kubenswrapper[5118]: I0121 00:11:27.945986 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="6dca5dc9-b7f4-449c-ab11-734b2faf1de6" containerName="pruner"
Jan 21 00:11:28 crc kubenswrapper[5118]: E0121 00:11:28.373404 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e6801320874cfa8e7840e091821e30de2bb270acdcb6fbda9f5cea2ffc991e6" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 00:11:28 crc kubenswrapper[5118]: E0121 00:11:28.374823 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e6801320874cfa8e7840e091821e30de2bb270acdcb6fbda9f5cea2ffc991e6" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 00:11:28 crc kubenswrapper[5118]: E0121 00:11:28.376859 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e6801320874cfa8e7840e091821e30de2bb270acdcb6fbda9f5cea2ffc991e6" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 00:11:28 crc kubenswrapper[5118]: E0121 00:11:28.376926 5118 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" podUID="26d1a4fa-1469-4128-bd56-c9a122b28068" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 21 00:11:28 crc kubenswrapper[5118]: I0121 00:11:28.801608 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-tlb84" event={"ID":"0d503143-f75b-40e6-b0e3-d1bd595a05ae","Type":"ContainerStarted","Data":"50af60c127f5188bd7aca1af9976285e835780921bf53a2f57465bd5da7f3a21"}
Jan 21 00:11:28 crc kubenswrapper[5118]: I0121 00:11:28.801778 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 21 00:11:28 crc kubenswrapper[5118]: I0121 00:11:28.801798 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-tlb84" event={"ID":"0d503143-f75b-40e6-b0e3-d1bd595a05ae","Type":"ContainerStarted","Data":"453e704f337a09cc1d6dd181cdc31ee5be695264ee553e6867ffe307fbfd48fc"}
Jan 21 00:11:28 crc kubenswrapper[5118]: I0121 00:11:28.801994 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:28 crc kubenswrapper[5118]: I0121 00:11:28.802811 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 00:11:28 crc kubenswrapper[5118]: I0121 00:11:28.805793 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Jan 21 00:11:28 crc kubenswrapper[5118]: I0121 00:11:28.805993 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Jan 21 00:11:28 crc kubenswrapper[5118]: I0121 00:11:28.825438 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-tlb84" podStartSLOduration=122.825421662 podStartE2EDuration="2m2.825421662s" podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:28.82453393 +0000 UTC m=+144.148780958" watchObservedRunningTime="2026-01-21 00:11:28.825421662 +0000 UTC m=+144.149668680"
Jan 21 00:11:28 crc kubenswrapper[5118]: I0121 00:11:28.939850 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/215f13ff-cdcf-43b0-9278-e1833f612ae9-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"215f13ff-cdcf-43b0-9278-e1833f612ae9\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 00:11:28 crc kubenswrapper[5118]: I0121 00:11:28.939963 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/215f13ff-cdcf-43b0-9278-e1833f612ae9-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"215f13ff-cdcf-43b0-9278-e1833f612ae9\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 00:11:29 crc kubenswrapper[5118]: I0121 00:11:29.040802 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/215f13ff-cdcf-43b0-9278-e1833f612ae9-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"215f13ff-cdcf-43b0-9278-e1833f612ae9\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 00:11:29 crc kubenswrapper[5118]: I0121 00:11:29.040987 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/215f13ff-cdcf-43b0-9278-e1833f612ae9-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"215f13ff-cdcf-43b0-9278-e1833f612ae9\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 00:11:29 crc kubenswrapper[5118]: I0121 00:11:29.041068 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/215f13ff-cdcf-43b0-9278-e1833f612ae9-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"215f13ff-cdcf-43b0-9278-e1833f612ae9\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 00:11:29 crc kubenswrapper[5118]: I0121 00:11:29.061737 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/215f13ff-cdcf-43b0-9278-e1833f612ae9-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"215f13ff-cdcf-43b0-9278-e1833f612ae9\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 00:11:29 crc kubenswrapper[5118]: I0121 00:11:29.124329 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 00:11:29 crc kubenswrapper[5118]: I0121 00:11:29.363745 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-pdh68"
Jan 21 00:11:29 crc kubenswrapper[5118]: I0121 00:11:29.463063 5118 ???:1] "http: TLS handshake error from 192.168.126.11:51796: no serving certificate available for the kubelet"
Jan 21 00:11:31 crc kubenswrapper[5118]: I0121 00:11:31.512251 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2"
Jan 21 00:11:31 crc kubenswrapper[5118]: I0121 00:11:31.879634 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jcv4b"
Jan 21 00:11:35 crc kubenswrapper[5118]: I0121 00:11:35.416025 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:35 crc kubenswrapper[5118]: I0121 00:11:35.421389 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-xbtg4"
Jan 21 00:11:36 crc kubenswrapper[5118]: I0121 00:11:36.890363 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:11:36 crc kubenswrapper[5118]: I0121 00:11:36.890705 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:11:36 crc kubenswrapper[5118]: I0121 00:11:36.898811 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:11:36 crc kubenswrapper[5118]: I0121 00:11:36.992379 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 00:11:36 crc kubenswrapper[5118]: I0121 00:11:36.992499 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 00:11:36 crc kubenswrapper[5118]: I0121 00:11:36.996733 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 00:11:36 crc kubenswrapper[5118]: I0121 00:11:36.996886 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 00:11:37 crc kubenswrapper[5118]: I0121 00:11:37.096847 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 00:11:37 crc kubenswrapper[5118]: I0121 00:11:37.103635 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 00:11:37 crc kubenswrapper[5118]: I0121 00:11:37.120710 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:11:37 crc kubenswrapper[5118]: I0121 00:11:37.517094 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 00:11:37 crc kubenswrapper[5118]: I0121 00:11:37.524945 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs\") pod \"network-metrics-daemon-9hvtf\" (UID: \"21105fbf-0225-4ba6-ba90-17808d5250c6\") " pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:11:37 crc kubenswrapper[5118]: I0121 00:11:37.531285 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21105fbf-0225-4ba6-ba90-17808d5250c6-metrics-certs\") pod \"network-metrics-daemon-9hvtf\" (UID: \"21105fbf-0225-4ba6-ba90-17808d5250c6\") " pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:11:37 crc kubenswrapper[5118]: I0121 00:11:37.817838 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hvtf"
Jan 21 00:11:38 crc kubenswrapper[5118]: E0121 00:11:38.372882 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e6801320874cfa8e7840e091821e30de2bb270acdcb6fbda9f5cea2ffc991e6" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 00:11:38 crc kubenswrapper[5118]: E0121 00:11:38.374265 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e6801320874cfa8e7840e091821e30de2bb270acdcb6fbda9f5cea2ffc991e6" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 00:11:38 crc kubenswrapper[5118]: E0121 00:11:38.376544 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e6801320874cfa8e7840e091821e30de2bb270acdcb6fbda9f5cea2ffc991e6" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 00:11:38 crc kubenswrapper[5118]: E0121 00:11:38.376583 5118 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" podUID="26d1a4fa-1469-4128-bd56-c9a122b28068" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 21 00:11:39 crc kubenswrapper[5118]: I0121 00:11:39.727846 5118 ???:1] "http: TLS handshake error from 192.168.126.11:43202: no serving certificate available for the kubelet"
Jan 21 00:11:48 crc kubenswrapper[5118]: E0121 00:11:48.372969 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e6801320874cfa8e7840e091821e30de2bb270acdcb6fbda9f5cea2ffc991e6" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 00:11:48 crc kubenswrapper[5118]: E0121 00:11:48.374816 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e6801320874cfa8e7840e091821e30de2bb270acdcb6fbda9f5cea2ffc991e6" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 00:11:48 crc kubenswrapper[5118]: E0121 00:11:48.376111 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e6801320874cfa8e7840e091821e30de2bb270acdcb6fbda9f5cea2ffc991e6" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 00:11:48 crc kubenswrapper[5118]: E0121 00:11:48.376150 5118 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" podUID="26d1a4fa-1469-4128-bd56-c9a122b28068" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 21 00:11:49 crc kubenswrapper[5118]: I0121 00:11:49.199927 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-tlb84"
Jan 21 00:11:52 crc kubenswrapper[5118]: I0121 00:11:52.269492 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8zvdp"
Jan 21 00:11:52 crc kubenswrapper[5118]: I0121 00:11:52.635791 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-g77tt_26d1a4fa-1469-4128-bd56-c9a122b28068/kube-multus-additional-cni-plugins/0.log"
Jan 21 00:11:52 crc kubenswrapper[5118]: I0121 00:11:52.635841 5118 generic.go:358] "Generic (PLEG): container finished" podID="26d1a4fa-1469-4128-bd56-c9a122b28068" containerID="1e6801320874cfa8e7840e091821e30de2bb270acdcb6fbda9f5cea2ffc991e6" exitCode=137
Jan 21 00:11:52 crc kubenswrapper[5118]: I0121 00:11:52.635974 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" event={"ID":"26d1a4fa-1469-4128-bd56-c9a122b28068","Type":"ContainerDied","Data":"1e6801320874cfa8e7840e091821e30de2bb270acdcb6fbda9f5cea2ffc991e6"}
Jan 21 00:11:54 crc kubenswrapper[5118]: I0121 00:11:54.773310 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-g77tt_26d1a4fa-1469-4128-bd56-c9a122b28068/kube-multus-additional-cni-plugins/0.log"
Jan 21 00:11:54 crc kubenswrapper[5118]: I0121 00:11:54.774017 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt"
Jan 21 00:11:54 crc kubenswrapper[5118]: I0121 00:11:54.813754 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wchdj\" (UniqueName: \"kubernetes.io/projected/26d1a4fa-1469-4128-bd56-c9a122b28068-kube-api-access-wchdj\") pod \"26d1a4fa-1469-4128-bd56-c9a122b28068\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") "
Jan 21 00:11:54 crc kubenswrapper[5118]: I0121 00:11:54.813814 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/26d1a4fa-1469-4128-bd56-c9a122b28068-tuning-conf-dir\") pod \"26d1a4fa-1469-4128-bd56-c9a122b28068\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") "
Jan 21 00:11:54 crc kubenswrapper[5118]: I0121 00:11:54.813867 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/26d1a4fa-1469-4128-bd56-c9a122b28068-ready\") pod \"26d1a4fa-1469-4128-bd56-c9a122b28068\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") "
Jan 21 00:11:54 crc kubenswrapper[5118]: I0121 00:11:54.813912 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/26d1a4fa-1469-4128-bd56-c9a122b28068-cni-sysctl-allowlist\") pod \"26d1a4fa-1469-4128-bd56-c9a122b28068\" (UID: \"26d1a4fa-1469-4128-bd56-c9a122b28068\") "
Jan 21 00:11:54 crc kubenswrapper[5118]: I0121 00:11:54.814094 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26d1a4fa-1469-4128-bd56-c9a122b28068-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "26d1a4fa-1469-4128-bd56-c9a122b28068" (UID: "26d1a4fa-1469-4128-bd56-c9a122b28068"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 00:11:54 crc kubenswrapper[5118]: I0121 00:11:54.814723 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d1a4fa-1469-4128-bd56-c9a122b28068-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "26d1a4fa-1469-4128-bd56-c9a122b28068" (UID: "26d1a4fa-1469-4128-bd56-c9a122b28068"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:11:54 crc kubenswrapper[5118]: I0121 00:11:54.814925 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26d1a4fa-1469-4128-bd56-c9a122b28068-ready" (OuterVolumeSpecName: "ready") pod "26d1a4fa-1469-4128-bd56-c9a122b28068" (UID: "26d1a4fa-1469-4128-bd56-c9a122b28068"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:11:54 crc kubenswrapper[5118]: I0121 00:11:54.853384 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d1a4fa-1469-4128-bd56-c9a122b28068-kube-api-access-wchdj" (OuterVolumeSpecName: "kube-api-access-wchdj") pod "26d1a4fa-1469-4128-bd56-c9a122b28068" (UID: "26d1a4fa-1469-4128-bd56-c9a122b28068"). InnerVolumeSpecName "kube-api-access-wchdj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:11:54 crc kubenswrapper[5118]: I0121 00:11:54.916523 5118 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/26d1a4fa-1469-4128-bd56-c9a122b28068-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 21 00:11:54 crc kubenswrapper[5118]: I0121 00:11:54.916886 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wchdj\" (UniqueName: \"kubernetes.io/projected/26d1a4fa-1469-4128-bd56-c9a122b28068-kube-api-access-wchdj\") on node \"crc\" DevicePath \"\""
Jan 21 00:11:54 crc kubenswrapper[5118]: I0121 00:11:54.916896 5118 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/26d1a4fa-1469-4128-bd56-c9a122b28068-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Jan 21 00:11:54 crc kubenswrapper[5118]: I0121 00:11:54.916904 5118 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/26d1a4fa-1469-4128-bd56-c9a122b28068-ready\") on node \"crc\" DevicePath \"\""
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.438267 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.452301 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9hvtf"]
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.656224 5118 generic.go:358] "Generic (PLEG): container finished" podID="3ee59881-8e70-4769-b92d-5df34a2b9130" containerID="736538107d9a3bd0dcd131d07877a6353bec420c76db6a5f3774e2e2a504fbbf" exitCode=0
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.656298 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vp4hf" event={"ID":"3ee59881-8e70-4769-b92d-5df34a2b9130","Type":"ContainerDied","Data":"736538107d9a3bd0dcd131d07877a6353bec420c76db6a5f3774e2e2a504fbbf"}
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.659900 5118 generic.go:358] "Generic (PLEG): container finished" podID="d18361c6-e5b6-44f0-b6d4-4dae1ff8c741" containerID="8fdc2b2cd5aeb98eeccb724ed45fa038952a7530fb2b905459918a2949fc02b3" exitCode=0
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.660021 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4frhj" event={"ID":"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741","Type":"ContainerDied","Data":"8fdc2b2cd5aeb98eeccb724ed45fa038952a7530fb2b905459918a2949fc02b3"}
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.661641 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-g77tt_26d1a4fa-1469-4128-bd56-c9a122b28068/kube-multus-additional-cni-plugins/0.log"
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.661789 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt"
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.661823 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-g77tt" event={"ID":"26d1a4fa-1469-4128-bd56-c9a122b28068","Type":"ContainerDied","Data":"b17a1bc2007aef7c810b2739ebfbd9a82c6711e46c827105e7ec831963ed0a27"}
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.661863 5118 scope.go:117] "RemoveContainer" containerID="1e6801320874cfa8e7840e091821e30de2bb270acdcb6fbda9f5cea2ffc991e6"
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.663513 5118 generic.go:358] "Generic (PLEG): container finished" podID="28172373-ad9f-4755-a060-b467a2817a67" containerID="1bf31aadce08469a5109a87ba9ef45d3ddc653963f95982cc4574c34fdb52d36" exitCode=0
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.663608 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6c5wr" event={"ID":"28172373-ad9f-4755-a060-b467a2817a67","Type":"ContainerDied","Data":"1bf31aadce08469a5109a87ba9ef45d3ddc653963f95982cc4574c34fdb52d36"}
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.666113 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6c4w" event={"ID":"f20612e2-22cd-486f-b881-af82d40bd144","Type":"ContainerStarted","Data":"6dbb779b1531e72003ebd601e9389f21f87b86a67ab62411fe8585d7689e8d31"}
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.722430 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"d762455ab088480abfb01f9a1f7d9198ab8cef611e6591a9540804dd74911a1a"}
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.732400 5118 generic.go:358] "Generic (PLEG): container finished" podID="5346f11a-11bb-4650-8de5-7988e8cb2bba" containerID="1f4f5398d1d647dea6419a97362d216c1c8c24ec582354d40541ac1c8dfaf9ad" exitCode=0
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.732483 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlmpf" event={"ID":"5346f11a-11bb-4650-8de5-7988e8cb2bba","Type":"ContainerDied","Data":"1f4f5398d1d647dea6419a97362d216c1c8c24ec582354d40541ac1c8dfaf9ad"}
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.738535 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"fe52718ca6708e386147f44b92aa02ed1110a90efc2a788ed02ee839c10ff290"}
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.738586 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"f22d4bb1a9fd21f9e6c678a2c20f9429429d7430a37ee651430b14497b2cdf20"}
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.745923 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.867433 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94zfv" event={"ID":"8d65d512-7c64-462f-b40d-ad0252a88233","Type":"ContainerStarted","Data":"064e7880e54045b59932d7093dd755638be2dbfbe989d221424a43ec505c91e1"}
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.873534 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jssxn" event={"ID":"a4b72480-d05d-4b1a-9b30-0d3e80ea6249","Type":"ContainerStarted","Data":"66f553be86b261ed50480cb7b063522bab219fbb6e2153941caa9305a30aa6b1"}
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.876562 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5pql" event={"ID":"67f8120d-af6d-4e77-9772-0fc55dfad0bf","Type":"ContainerStarted","Data":"1a0da28383c155e574efb1373c98b8cd787809b3ef0e2e9d3ff83f06684bb4fb"}
Jan 21 00:11:55 crc kubenswrapper[5118]: I0121 00:11:55.878418 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"906654c2bc03c840f24bff290ae932c63bdce0c48a64c102afa4d01d1a75a774"}
Jan 21 00:11:56 crc kubenswrapper[5118]: W0121 00:11:56.190903 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21105fbf_0225_4ba6_ba90_17808d5250c6.slice/crio-7ab713ba739f22957d00e1d851a960a1a4a4d1f96ce0cdd165d56f5dd6b8b20d WatchSource:0}: Error finding container 7ab713ba739f22957d00e1d851a960a1a4a4d1f96ce0cdd165d56f5dd6b8b20d: Status 404 returned error can't find the container with id 7ab713ba739f22957d00e1d851a960a1a4a4d1f96ce0cdd165d56f5dd6b8b20d
Jan 21 00:11:56 crc kubenswrapper[5118]: I0121 00:11:56.885699 5118 generic.go:358] "Generic (PLEG): container finished" podID="f20612e2-22cd-486f-b881-af82d40bd144" containerID="6dbb779b1531e72003ebd601e9389f21f87b86a67ab62411fe8585d7689e8d31" exitCode=0
Jan 21 00:11:56 crc kubenswrapper[5118]: I0121 00:11:56.885882 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6c4w" event={"ID":"f20612e2-22cd-486f-b881-af82d40bd144","Type":"ContainerDied","Data":"6dbb779b1531e72003ebd601e9389f21f87b86a67ab62411fe8585d7689e8d31"}
Jan 21 00:11:56 crc kubenswrapper[5118]: I0121 00:11:56.891714 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9hvtf" event={"ID":"21105fbf-0225-4ba6-ba90-17808d5250c6","Type":"ContainerStarted","Data":"7ab713ba739f22957d00e1d851a960a1a4a4d1f96ce0cdd165d56f5dd6b8b20d"}
Jan 21 00:11:56 crc kubenswrapper[5118]: I0121 00:11:56.895818 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"215f13ff-cdcf-43b0-9278-e1833f612ae9","Type":"ContainerStarted","Data":"0a9882bf58c4cb321eb11797b2ad9181d88bb13d8f96939f2c7b302af6f54313"}
Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.152432 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-g77tt"]
Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.158019 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-g77tt"]
Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.901728 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6c5wr" event={"ID":"28172373-ad9f-4755-a060-b467a2817a67","Type":"ContainerStarted","Data":"71f5cae005056bcf6ab18ef5f90f6f3cfd163f83065921bc0194f42eb2ee2571"}
Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.905317 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6c4w" event={"ID":"f20612e2-22cd-486f-b881-af82d40bd144","Type":"ContainerStarted","Data":"9057b8184e7621c25bf1204803768b6924991fbb94d64cf36b670df60242f816"}
Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.906998 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"a201d70b858b4109dfb354d0fd270bfa712d53fc650f6dd2d87dfad2b229dd72"}
Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.909342 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlmpf" event={"ID":"5346f11a-11bb-4650-8de5-7988e8cb2bba","Type":"ContainerStarted","Data":"f6aa4fc4d80c9e9a954c43c2f176c793f1551616871f70ee447a92a1ad677cc2"}
Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.913106 5118 generic.go:358] "Generic (PLEG): container finished" podID="a4b72480-d05d-4b1a-9b30-0d3e80ea6249" containerID="66f553be86b261ed50480cb7b063522bab219fbb6e2153941caa9305a30aa6b1" exitCode=0
Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.913229 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jssxn" event={"ID":"a4b72480-d05d-4b1a-9b30-0d3e80ea6249","Type":"ContainerDied","Data":"66f553be86b261ed50480cb7b063522bab219fbb6e2153941caa9305a30aa6b1"}
Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.917761 5118 generic.go:358] "Generic (PLEG): container finished" podID="67f8120d-af6d-4e77-9772-0fc55dfad0bf" containerID="1a0da28383c155e574efb1373c98b8cd787809b3ef0e2e9d3ff83f06684bb4fb" exitCode=0
Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.917890 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5pql" event={"ID":"67f8120d-af6d-4e77-9772-0fc55dfad0bf","Type":"ContainerDied","Data":"1a0da28383c155e574efb1373c98b8cd787809b3ef0e2e9d3ff83f06684bb4fb"}
Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.920205 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6c5wr" podStartSLOduration=6.008729273 podStartE2EDuration="37.920187254s" podCreationTimestamp="2026-01-21 00:11:20 +0000 UTC" firstStartedPulling="2026-01-21 00:11:22.752283708 +0000 UTC m=+138.076530726" lastFinishedPulling="2026-01-21 00:11:54.663741689 +0000 UTC m=+169.987988707" observedRunningTime="2026-01-21 00:11:57.919514427 +0000 UTC m=+173.243761455" watchObservedRunningTime="2026-01-21 00:11:57.920187254 +0000 UTC m=+173.244434292"
Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.923330
5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"6ef62f73c86b7afd11200c1e41d8840689182df1e61c82dfa80fcfd5632f0e47"} Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.930082 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9hvtf" event={"ID":"21105fbf-0225-4ba6-ba90-17808d5250c6","Type":"ContainerStarted","Data":"61d3d0f56d90b6e8a9316658fb3f98140b25d642f240a1dc9f0b0e8740b986e1"} Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.931506 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"215f13ff-cdcf-43b0-9278-e1833f612ae9","Type":"ContainerStarted","Data":"bb29f92438fc01ff507cdfce63ddb408d032f8a7c5dc31e6d512e9526ca6e0ed"} Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.933646 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vp4hf" event={"ID":"3ee59881-8e70-4769-b92d-5df34a2b9130","Type":"ContainerStarted","Data":"e382babedd3773e22c31ea3b3da3339660ba3ba15a4a501c197b2a2008acc140"} Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.937077 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4frhj" event={"ID":"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741","Type":"ContainerStarted","Data":"b612f5c41ec000be28e4c9c9896ae76218539305be25a1fd8cd030aa10e11d17"} Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.940913 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mlmpf" podStartSLOduration=5.301887041 podStartE2EDuration="34.940893589s" podCreationTimestamp="2026-01-21 00:11:23 +0000 UTC" firstStartedPulling="2026-01-21 00:11:25.055514232 +0000 UTC m=+140.379761250" lastFinishedPulling="2026-01-21 
00:11:54.69452078 +0000 UTC m=+170.018767798" observedRunningTime="2026-01-21 00:11:57.938352255 +0000 UTC m=+173.262599293" watchObservedRunningTime="2026-01-21 00:11:57.940893589 +0000 UTC m=+173.265140607" Jan 21 00:11:57 crc kubenswrapper[5118]: I0121 00:11:57.985251 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s6c4w" podStartSLOduration=6.175291204 podStartE2EDuration="36.985231944s" podCreationTimestamp="2026-01-21 00:11:21 +0000 UTC" firstStartedPulling="2026-01-21 00:11:23.927509119 +0000 UTC m=+139.251756137" lastFinishedPulling="2026-01-21 00:11:54.737449859 +0000 UTC m=+170.061696877" observedRunningTime="2026-01-21 00:11:57.984261349 +0000 UTC m=+173.308508377" watchObservedRunningTime="2026-01-21 00:11:57.985231944 +0000 UTC m=+173.309478972" Jan 21 00:11:58 crc kubenswrapper[5118]: I0121 00:11:58.147838 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=31.147819468 podStartE2EDuration="31.147819468s" podCreationTimestamp="2026-01-21 00:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:58.146592937 +0000 UTC m=+173.470839975" watchObservedRunningTime="2026-01-21 00:11:58.147819468 +0000 UTC m=+173.472066486" Jan 21 00:11:58 crc kubenswrapper[5118]: I0121 00:11:58.171098 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4frhj" podStartSLOduration=6.51192825 podStartE2EDuration="36.171080448s" podCreationTimestamp="2026-01-21 00:11:22 +0000 UTC" firstStartedPulling="2026-01-21 00:11:25.101852658 +0000 UTC m=+140.426099676" lastFinishedPulling="2026-01-21 00:11:54.761004856 +0000 UTC m=+170.085251874" observedRunningTime="2026-01-21 00:11:58.168527043 +0000 UTC m=+173.492774071" 
watchObservedRunningTime="2026-01-21 00:11:58.171080448 +0000 UTC m=+173.495327466" Jan 21 00:11:58 crc kubenswrapper[5118]: I0121 00:11:58.195968 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vp4hf" podStartSLOduration=6.22502769 podStartE2EDuration="38.195948549s" podCreationTimestamp="2026-01-21 00:11:20 +0000 UTC" firstStartedPulling="2026-01-21 00:11:22.749335703 +0000 UTC m=+138.073582721" lastFinishedPulling="2026-01-21 00:11:54.720256552 +0000 UTC m=+170.044503580" observedRunningTime="2026-01-21 00:11:58.186998102 +0000 UTC m=+173.511245120" watchObservedRunningTime="2026-01-21 00:11:58.195948549 +0000 UTC m=+173.520195567" Jan 21 00:11:58 crc kubenswrapper[5118]: I0121 00:11:58.944970 5118 generic.go:358] "Generic (PLEG): container finished" podID="8d65d512-7c64-462f-b40d-ad0252a88233" containerID="064e7880e54045b59932d7093dd755638be2dbfbe989d221424a43ec505c91e1" exitCode=0 Jan 21 00:11:58 crc kubenswrapper[5118]: I0121 00:11:58.945332 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94zfv" event={"ID":"8d65d512-7c64-462f-b40d-ad0252a88233","Type":"ContainerDied","Data":"064e7880e54045b59932d7093dd755638be2dbfbe989d221424a43ec505c91e1"} Jan 21 00:11:58 crc kubenswrapper[5118]: I0121 00:11:58.949031 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jssxn" event={"ID":"a4b72480-d05d-4b1a-9b30-0d3e80ea6249","Type":"ContainerStarted","Data":"cb0bf453cfba4b585f0e472106c21d7f07b33bcf62d3b1705980f26fa79a0dbb"} Jan 21 00:11:58 crc kubenswrapper[5118]: I0121 00:11:58.953218 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5pql" event={"ID":"67f8120d-af6d-4e77-9772-0fc55dfad0bf","Type":"ContainerStarted","Data":"00fe3c213943c94d49f9c07ebca5629df4b6ccb2fab0e8cfe73f67a6fda85a09"} Jan 21 00:11:58 crc kubenswrapper[5118]: I0121 00:11:58.957741 
5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9hvtf" event={"ID":"21105fbf-0225-4ba6-ba90-17808d5250c6","Type":"ContainerStarted","Data":"5e6e2b2b51fc146d4af317bb67d915740d3d283df726d68988a7b2de7adf9e82"} Jan 21 00:11:58 crc kubenswrapper[5118]: I0121 00:11:58.959021 5118 generic.go:358] "Generic (PLEG): container finished" podID="215f13ff-cdcf-43b0-9278-e1833f612ae9" containerID="bb29f92438fc01ff507cdfce63ddb408d032f8a7c5dc31e6d512e9526ca6e0ed" exitCode=0 Jan 21 00:11:58 crc kubenswrapper[5118]: I0121 00:11:58.959148 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"215f13ff-cdcf-43b0-9278-e1833f612ae9","Type":"ContainerDied","Data":"bb29f92438fc01ff507cdfce63ddb408d032f8a7c5dc31e6d512e9526ca6e0ed"} Jan 21 00:11:58 crc kubenswrapper[5118]: I0121 00:11:58.986683 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26d1a4fa-1469-4128-bd56-c9a122b28068" path="/var/lib/kubelet/pods/26d1a4fa-1469-4128-bd56-c9a122b28068/volumes" Jan 21 00:11:59 crc kubenswrapper[5118]: I0121 00:11:59.027276 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jssxn" podStartSLOduration=6.382651974 podStartE2EDuration="35.027257787s" podCreationTimestamp="2026-01-21 00:11:24 +0000 UTC" firstStartedPulling="2026-01-21 00:11:26.120481277 +0000 UTC m=+141.444728295" lastFinishedPulling="2026-01-21 00:11:54.76508709 +0000 UTC m=+170.089334108" observedRunningTime="2026-01-21 00:11:59.025918153 +0000 UTC m=+174.350165171" watchObservedRunningTime="2026-01-21 00:11:59.027257787 +0000 UTC m=+174.351504805" Jan 21 00:11:59 crc kubenswrapper[5118]: I0121 00:11:59.310337 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-9hvtf" podStartSLOduration=153.310317687 podStartE2EDuration="2m33.310317687s" 
podCreationTimestamp="2026-01-21 00:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:11:59.305756271 +0000 UTC m=+174.630003309" watchObservedRunningTime="2026-01-21 00:11:59.310317687 +0000 UTC m=+174.634564715" Jan 21 00:11:59 crc kubenswrapper[5118]: I0121 00:11:59.336593 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s5pql" podStartSLOduration=7.450966108 podStartE2EDuration="39.336572583s" podCreationTimestamp="2026-01-21 00:11:20 +0000 UTC" firstStartedPulling="2026-01-21 00:11:22.875914514 +0000 UTC m=+138.200161542" lastFinishedPulling="2026-01-21 00:11:54.761520999 +0000 UTC m=+170.085768017" observedRunningTime="2026-01-21 00:11:59.334240764 +0000 UTC m=+174.658487812" watchObservedRunningTime="2026-01-21 00:11:59.336572583 +0000 UTC m=+174.660819601" Jan 21 00:11:59 crc kubenswrapper[5118]: I0121 00:11:59.985167 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94zfv" event={"ID":"8d65d512-7c64-462f-b40d-ad0252a88233","Type":"ContainerStarted","Data":"532226610bc3672e54c1a84b2f208a59250e8c5d82ca78ebcde74863bae441d3"} Jan 21 00:12:00 crc kubenswrapper[5118]: I0121 00:12:00.003634 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-94zfv" podStartSLOduration=7.380071679 podStartE2EDuration="37.003614584s" podCreationTimestamp="2026-01-21 00:11:23 +0000 UTC" firstStartedPulling="2026-01-21 00:11:25.084949819 +0000 UTC m=+140.409196837" lastFinishedPulling="2026-01-21 00:11:54.708492724 +0000 UTC m=+170.032739742" observedRunningTime="2026-01-21 00:12:00.002215288 +0000 UTC m=+175.326462316" watchObservedRunningTime="2026-01-21 00:12:00.003614584 +0000 UTC m=+175.327861602" Jan 21 00:12:00 crc kubenswrapper[5118]: I0121 00:12:00.250976 5118 ???:1] "http: TLS handshake 
error from 192.168.126.11:41938: no serving certificate available for the kubelet" Jan 21 00:12:00 crc kubenswrapper[5118]: I0121 00:12:00.285704 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 00:12:00 crc kubenswrapper[5118]: I0121 00:12:00.338349 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/215f13ff-cdcf-43b0-9278-e1833f612ae9-kube-api-access\") pod \"215f13ff-cdcf-43b0-9278-e1833f612ae9\" (UID: \"215f13ff-cdcf-43b0-9278-e1833f612ae9\") " Jan 21 00:12:00 crc kubenswrapper[5118]: I0121 00:12:00.338537 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/215f13ff-cdcf-43b0-9278-e1833f612ae9-kubelet-dir\") pod \"215f13ff-cdcf-43b0-9278-e1833f612ae9\" (UID: \"215f13ff-cdcf-43b0-9278-e1833f612ae9\") " Jan 21 00:12:00 crc kubenswrapper[5118]: I0121 00:12:00.338649 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/215f13ff-cdcf-43b0-9278-e1833f612ae9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "215f13ff-cdcf-43b0-9278-e1833f612ae9" (UID: "215f13ff-cdcf-43b0-9278-e1833f612ae9"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:12:00 crc kubenswrapper[5118]: I0121 00:12:00.338986 5118 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/215f13ff-cdcf-43b0-9278-e1833f612ae9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:00 crc kubenswrapper[5118]: I0121 00:12:00.348410 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215f13ff-cdcf-43b0-9278-e1833f612ae9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "215f13ff-cdcf-43b0-9278-e1833f612ae9" (UID: "215f13ff-cdcf-43b0-9278-e1833f612ae9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:12:00 crc kubenswrapper[5118]: I0121 00:12:00.440103 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/215f13ff-cdcf-43b0-9278-e1833f612ae9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:00 crc kubenswrapper[5118]: I0121 00:12:00.837522 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6c5wr" Jan 21 00:12:00 crc kubenswrapper[5118]: I0121 00:12:00.837577 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-6c5wr" Jan 21 00:12:00 crc kubenswrapper[5118]: I0121 00:12:00.922660 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6c5wr" Jan 21 00:12:00 crc kubenswrapper[5118]: I0121 00:12:00.994261 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"215f13ff-cdcf-43b0-9278-e1833f612ae9","Type":"ContainerDied","Data":"0a9882bf58c4cb321eb11797b2ad9181d88bb13d8f96939f2c7b302af6f54313"} Jan 21 00:12:00 crc kubenswrapper[5118]: I0121 00:12:00.994289 
5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 00:12:00 crc kubenswrapper[5118]: I0121 00:12:00.994304 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a9882bf58c4cb321eb11797b2ad9181d88bb13d8f96939f2c7b302af6f54313" Jan 21 00:12:01 crc kubenswrapper[5118]: I0121 00:12:01.008784 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s5pql" Jan 21 00:12:01 crc kubenswrapper[5118]: I0121 00:12:01.008826 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-s5pql" Jan 21 00:12:01 crc kubenswrapper[5118]: I0121 00:12:01.060428 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s5pql" Jan 21 00:12:01 crc kubenswrapper[5118]: I0121 00:12:01.259365 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-vp4hf" Jan 21 00:12:01 crc kubenswrapper[5118]: I0121 00:12:01.259420 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vp4hf" Jan 21 00:12:01 crc kubenswrapper[5118]: I0121 00:12:01.300291 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vp4hf" Jan 21 00:12:01 crc kubenswrapper[5118]: I0121 00:12:01.625649 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-s6c4w" Jan 21 00:12:01 crc kubenswrapper[5118]: I0121 00:12:01.629149 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s6c4w" Jan 21 00:12:01 crc kubenswrapper[5118]: I0121 00:12:01.673212 5118 kubelet.go:2658] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s6c4w" Jan 21 00:12:02 crc kubenswrapper[5118]: I0121 00:12:02.052552 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vp4hf" Jan 21 00:12:02 crc kubenswrapper[5118]: I0121 00:12:02.478401 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5gv2n"] Jan 21 00:12:02 crc kubenswrapper[5118]: I0121 00:12:02.988589 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:12:02 crc kubenswrapper[5118]: I0121 00:12:02.988791 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:12:03 crc kubenswrapper[5118]: I0121 00:12:03.030135 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:12:03 crc kubenswrapper[5118]: I0121 00:12:03.051190 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s6c4w" Jan 21 00:12:03 crc kubenswrapper[5118]: I0121 00:12:03.416625 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mlmpf" Jan 21 00:12:03 crc kubenswrapper[5118]: I0121 00:12:03.416678 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-mlmpf" Jan 21 00:12:03 crc kubenswrapper[5118]: I0121 00:12:03.459413 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mlmpf" Jan 21 00:12:04 crc kubenswrapper[5118]: I0121 00:12:04.043067 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:12:04 crc kubenswrapper[5118]: I0121 00:12:04.054070 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mlmpf" Jan 21 00:12:04 crc kubenswrapper[5118]: I0121 00:12:04.054241 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-94zfv" Jan 21 00:12:04 crc kubenswrapper[5118]: I0121 00:12:04.054729 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-94zfv" Jan 21 00:12:04 crc kubenswrapper[5118]: I0121 00:12:04.152458 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s6c4w"] Jan 21 00:12:04 crc kubenswrapper[5118]: I0121 00:12:04.155974 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 21 00:12:04 crc kubenswrapper[5118]: I0121 00:12:04.157052 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="215f13ff-cdcf-43b0-9278-e1833f612ae9" containerName="pruner" Jan 21 00:12:04 crc kubenswrapper[5118]: I0121 00:12:04.157072 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="215f13ff-cdcf-43b0-9278-e1833f612ae9" containerName="pruner" Jan 21 00:12:04 crc kubenswrapper[5118]: I0121 00:12:04.157084 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="26d1a4fa-1469-4128-bd56-c9a122b28068" containerName="kube-multus-additional-cni-plugins" Jan 21 00:12:04 crc kubenswrapper[5118]: I0121 00:12:04.157090 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d1a4fa-1469-4128-bd56-c9a122b28068" containerName="kube-multus-additional-cni-plugins" Jan 21 00:12:04 crc kubenswrapper[5118]: I0121 00:12:04.157625 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="215f13ff-cdcf-43b0-9278-e1833f612ae9" 
containerName="pruner" Jan 21 00:12:04 crc kubenswrapper[5118]: I0121 00:12:04.157647 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="26d1a4fa-1469-4128-bd56-c9a122b28068" containerName="kube-multus-additional-cni-plugins" Jan 21 00:12:05 crc kubenswrapper[5118]: I0121 00:12:05.087549 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-94zfv" podUID="8d65d512-7c64-462f-b40d-ad0252a88233" containerName="registry-server" probeResult="failure" output=< Jan 21 00:12:05 crc kubenswrapper[5118]: timeout: failed to connect service ":50051" within 1s Jan 21 00:12:05 crc kubenswrapper[5118]: > Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.708037 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-jssxn" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.708704 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s6c4w" podUID="f20612e2-22cd-486f-b881-af82d40bd144" containerName="registry-server" containerID="cri-o://9057b8184e7621c25bf1204803768b6924991fbb94d64cf36b670df60242f816" gracePeriod=2 Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.709932 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.712471 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.712547 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.716148 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.716282 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jssxn" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.716320 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vp4hf"] Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.716392 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jssxn" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.716402 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mlmpf"] Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.716415 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.777834 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.777942 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jssxn" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.778288 5118 kuberuntime_container.go:858] "Killing 
container with a grace period" pod="openshift-marketplace/certified-operators-vp4hf" podUID="3ee59881-8e70-4769-b92d-5df34a2b9130" containerName="registry-server" containerID="cri-o://e382babedd3773e22c31ea3b3da3339660ba3ba15a4a501c197b2a2008acc140" gracePeriod=2 Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.778432 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.807868 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9b64d95-63f3-4084-bcf2-406ff2c75cee-kubelet-dir\") pod \"installer-12-crc\" (UID: \"f9b64d95-63f3-4084-bcf2-406ff2c75cee\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.808299 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9b64d95-63f3-4084-bcf2-406ff2c75cee-kube-api-access\") pod \"installer-12-crc\" (UID: \"f9b64d95-63f3-4084-bcf2-406ff2c75cee\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.808362 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f9b64d95-63f3-4084-bcf2-406ff2c75cee-var-lock\") pod \"installer-12-crc\" (UID: \"f9b64d95-63f3-4084-bcf2-406ff2c75cee\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.808424 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37e7976c-a0ea-499a-9750-ddc0ff02006d-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"37e7976c-a0ea-499a-9750-ddc0ff02006d\") " 
pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.808482 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37e7976c-a0ea-499a-9750-ddc0ff02006d-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"37e7976c-a0ea-499a-9750-ddc0ff02006d\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.910148 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f9b64d95-63f3-4084-bcf2-406ff2c75cee-var-lock\") pod \"installer-12-crc\" (UID: \"f9b64d95-63f3-4084-bcf2-406ff2c75cee\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.910282 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37e7976c-a0ea-499a-9750-ddc0ff02006d-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"37e7976c-a0ea-499a-9750-ddc0ff02006d\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.910340 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37e7976c-a0ea-499a-9750-ddc0ff02006d-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"37e7976c-a0ea-499a-9750-ddc0ff02006d\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.910389 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9b64d95-63f3-4084-bcf2-406ff2c75cee-kubelet-dir\") pod \"installer-12-crc\" (UID: \"f9b64d95-63f3-4084-bcf2-406ff2c75cee\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 00:12:09 
crc kubenswrapper[5118]: I0121 00:12:09.910455 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9b64d95-63f3-4084-bcf2-406ff2c75cee-kube-api-access\") pod \"installer-12-crc\" (UID: \"f9b64d95-63f3-4084-bcf2-406ff2c75cee\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.911005 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f9b64d95-63f3-4084-bcf2-406ff2c75cee-var-lock\") pod \"installer-12-crc\" (UID: \"f9b64d95-63f3-4084-bcf2-406ff2c75cee\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.911282 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37e7976c-a0ea-499a-9750-ddc0ff02006d-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"37e7976c-a0ea-499a-9750-ddc0ff02006d\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.911321 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9b64d95-63f3-4084-bcf2-406ff2c75cee-kubelet-dir\") pod \"installer-12-crc\" (UID: \"f9b64d95-63f3-4084-bcf2-406ff2c75cee\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.930757 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9b64d95-63f3-4084-bcf2-406ff2c75cee-kube-api-access\") pod \"installer-12-crc\" (UID: \"f9b64d95-63f3-4084-bcf2-406ff2c75cee\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 00:12:09 crc kubenswrapper[5118]: I0121 00:12:09.933660 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37e7976c-a0ea-499a-9750-ddc0ff02006d-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"37e7976c-a0ea-499a-9750-ddc0ff02006d\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 00:12:10 crc kubenswrapper[5118]: I0121 00:12:10.031135 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 00:12:10 crc kubenswrapper[5118]: I0121 00:12:10.039787 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mlmpf" podUID="5346f11a-11bb-4650-8de5-7988e8cb2bba" containerName="registry-server" containerID="cri-o://f6aa4fc4d80c9e9a954c43c2f176c793f1551616871f70ee447a92a1ad677cc2" gracePeriod=2
Jan 21 00:12:10 crc kubenswrapper[5118]: I0121 00:12:10.102489 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 00:12:10 crc kubenswrapper[5118]: I0121 00:12:10.259806 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.043598 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6c5wr"
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.047381 5118 generic.go:358] "Generic (PLEG): container finished" podID="3ee59881-8e70-4769-b92d-5df34a2b9130" containerID="e382babedd3773e22c31ea3b3da3339660ba3ba15a4a501c197b2a2008acc140" exitCode=0
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.047456 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vp4hf" event={"ID":"3ee59881-8e70-4769-b92d-5df34a2b9130","Type":"ContainerDied","Data":"e382babedd3773e22c31ea3b3da3339660ba3ba15a4a501c197b2a2008acc140"}
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.049849 5118 generic.go:358] "Generic (PLEG): container finished" podID="f20612e2-22cd-486f-b881-af82d40bd144" containerID="9057b8184e7621c25bf1204803768b6924991fbb94d64cf36b670df60242f816" exitCode=0
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.049891 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6c4w" event={"ID":"f20612e2-22cd-486f-b881-af82d40bd144","Type":"ContainerDied","Data":"9057b8184e7621c25bf1204803768b6924991fbb94d64cf36b670df60242f816"}
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.094976 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jssxn"]
Jan 21 00:12:11 crc kubenswrapper[5118]: W0121 00:12:11.238890 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod37e7976c_a0ea_499a_9750_ddc0ff02006d.slice/crio-a820cacf4c60682e38c51319306609c96ad87f93a4f4117c4f6dfbad2562ab0e WatchSource:0}: Error finding container a820cacf4c60682e38c51319306609c96ad87f93a4f4117c4f6dfbad2562ab0e: Status 404 returned error can't find the container with id a820cacf4c60682e38c51319306609c96ad87f93a4f4117c4f6dfbad2562ab0e
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.292169 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6c4w"
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.329005 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f20612e2-22cd-486f-b881-af82d40bd144-catalog-content\") pod \"f20612e2-22cd-486f-b881-af82d40bd144\" (UID: \"f20612e2-22cd-486f-b881-af82d40bd144\") "
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.329051 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfsgd\" (UniqueName: \"kubernetes.io/projected/f20612e2-22cd-486f-b881-af82d40bd144-kube-api-access-kfsgd\") pod \"f20612e2-22cd-486f-b881-af82d40bd144\" (UID: \"f20612e2-22cd-486f-b881-af82d40bd144\") "
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.329109 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f20612e2-22cd-486f-b881-af82d40bd144-utilities\") pod \"f20612e2-22cd-486f-b881-af82d40bd144\" (UID: \"f20612e2-22cd-486f-b881-af82d40bd144\") "
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.330365 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f20612e2-22cd-486f-b881-af82d40bd144-utilities" (OuterVolumeSpecName: "utilities") pod "f20612e2-22cd-486f-b881-af82d40bd144" (UID: "f20612e2-22cd-486f-b881-af82d40bd144"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.347362 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f20612e2-22cd-486f-b881-af82d40bd144-kube-api-access-kfsgd" (OuterVolumeSpecName: "kube-api-access-kfsgd") pod "f20612e2-22cd-486f-b881-af82d40bd144" (UID: "f20612e2-22cd-486f-b881-af82d40bd144"). InnerVolumeSpecName "kube-api-access-kfsgd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.428531 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.430516 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kfsgd\" (UniqueName: \"kubernetes.io/projected/f20612e2-22cd-486f-b881-af82d40bd144-kube-api-access-kfsgd\") on node \"crc\" DevicePath \"\""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.430557 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f20612e2-22cd-486f-b881-af82d40bd144-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.513230 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f20612e2-22cd-486f-b881-af82d40bd144-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f20612e2-22cd-486f-b881-af82d40bd144" (UID: "f20612e2-22cd-486f-b881-af82d40bd144"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.531723 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f20612e2-22cd-486f-b881-af82d40bd144-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.633591 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vp4hf"
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.646379 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mlmpf"
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.733865 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5346f11a-11bb-4650-8de5-7988e8cb2bba-utilities\") pod \"5346f11a-11bb-4650-8de5-7988e8cb2bba\" (UID: \"5346f11a-11bb-4650-8de5-7988e8cb2bba\") "
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.733951 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22lph\" (UniqueName: \"kubernetes.io/projected/3ee59881-8e70-4769-b92d-5df34a2b9130-kube-api-access-22lph\") pod \"3ee59881-8e70-4769-b92d-5df34a2b9130\" (UID: \"3ee59881-8e70-4769-b92d-5df34a2b9130\") "
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.734036 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee59881-8e70-4769-b92d-5df34a2b9130-catalog-content\") pod \"3ee59881-8e70-4769-b92d-5df34a2b9130\" (UID: \"3ee59881-8e70-4769-b92d-5df34a2b9130\") "
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.734092 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mq4d\" (UniqueName: \"kubernetes.io/projected/5346f11a-11bb-4650-8de5-7988e8cb2bba-kube-api-access-2mq4d\") pod \"5346f11a-11bb-4650-8de5-7988e8cb2bba\" (UID: \"5346f11a-11bb-4650-8de5-7988e8cb2bba\") "
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.734213 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee59881-8e70-4769-b92d-5df34a2b9130-utilities\") pod \"3ee59881-8e70-4769-b92d-5df34a2b9130\" (UID: \"3ee59881-8e70-4769-b92d-5df34a2b9130\") "
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.734285 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5346f11a-11bb-4650-8de5-7988e8cb2bba-catalog-content\") pod \"5346f11a-11bb-4650-8de5-7988e8cb2bba\" (UID: \"5346f11a-11bb-4650-8de5-7988e8cb2bba\") "
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.735275 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5346f11a-11bb-4650-8de5-7988e8cb2bba-utilities" (OuterVolumeSpecName: "utilities") pod "5346f11a-11bb-4650-8de5-7988e8cb2bba" (UID: "5346f11a-11bb-4650-8de5-7988e8cb2bba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.735311 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ee59881-8e70-4769-b92d-5df34a2b9130-utilities" (OuterVolumeSpecName: "utilities") pod "3ee59881-8e70-4769-b92d-5df34a2b9130" (UID: "3ee59881-8e70-4769-b92d-5df34a2b9130"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.741146 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ee59881-8e70-4769-b92d-5df34a2b9130-kube-api-access-22lph" (OuterVolumeSpecName: "kube-api-access-22lph") pod "3ee59881-8e70-4769-b92d-5df34a2b9130" (UID: "3ee59881-8e70-4769-b92d-5df34a2b9130"). InnerVolumeSpecName "kube-api-access-22lph". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.742142 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5346f11a-11bb-4650-8de5-7988e8cb2bba-kube-api-access-2mq4d" (OuterVolumeSpecName: "kube-api-access-2mq4d") pod "5346f11a-11bb-4650-8de5-7988e8cb2bba" (UID: "5346f11a-11bb-4650-8de5-7988e8cb2bba"). InnerVolumeSpecName "kube-api-access-2mq4d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.755508 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5346f11a-11bb-4650-8de5-7988e8cb2bba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5346f11a-11bb-4650-8de5-7988e8cb2bba" (UID: "5346f11a-11bb-4650-8de5-7988e8cb2bba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.765701 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ee59881-8e70-4769-b92d-5df34a2b9130-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ee59881-8e70-4769-b92d-5df34a2b9130" (UID: "3ee59881-8e70-4769-b92d-5df34a2b9130"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.835520 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee59881-8e70-4769-b92d-5df34a2b9130-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.835552 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5346f11a-11bb-4650-8de5-7988e8cb2bba-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.835562 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5346f11a-11bb-4650-8de5-7988e8cb2bba-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.835571 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-22lph\" (UniqueName: \"kubernetes.io/projected/3ee59881-8e70-4769-b92d-5df34a2b9130-kube-api-access-22lph\") on node \"crc\" DevicePath \"\""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.835581 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee59881-8e70-4769-b92d-5df34a2b9130-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 00:12:11 crc kubenswrapper[5118]: I0121 00:12:11.835589 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2mq4d\" (UniqueName: \"kubernetes.io/projected/5346f11a-11bb-4650-8de5-7988e8cb2bba-kube-api-access-2mq4d\") on node \"crc\" DevicePath \"\""
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.053947 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s5pql"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.065435 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vp4hf" event={"ID":"3ee59881-8e70-4769-b92d-5df34a2b9130","Type":"ContainerDied","Data":"ddd1c15ef5b2696ac24d79911a84ba947ac73b25484518561e2454b7a1f7e5fe"}
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.065485 5118 scope.go:117] "RemoveContainer" containerID="e382babedd3773e22c31ea3b3da3339660ba3ba15a4a501c197b2a2008acc140"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.065606 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vp4hf"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.067879 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"f9b64d95-63f3-4084-bcf2-406ff2c75cee","Type":"ContainerStarted","Data":"df4fb1143d2b8e115a19f6481627d02f274d9c6b53eb5c0dc70566f3fe2473dd"}
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.067967 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"f9b64d95-63f3-4084-bcf2-406ff2c75cee","Type":"ContainerStarted","Data":"b332af179a85acb373a4603b4144a4dd23feddfed42479a5b7ab50c73a79c4cb"}
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.072117 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6c4w" event={"ID":"f20612e2-22cd-486f-b881-af82d40bd144","Type":"ContainerDied","Data":"1fab0350ac91f797f39adcd128591ff7294013db463957042cfdac25c3886971"}
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.072187 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6c4w"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.094617 5118 generic.go:358] "Generic (PLEG): container finished" podID="5346f11a-11bb-4650-8de5-7988e8cb2bba" containerID="f6aa4fc4d80c9e9a954c43c2f176c793f1551616871f70ee447a92a1ad677cc2" exitCode=0
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.094768 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mlmpf"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.094874 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlmpf" event={"ID":"5346f11a-11bb-4650-8de5-7988e8cb2bba","Type":"ContainerDied","Data":"f6aa4fc4d80c9e9a954c43c2f176c793f1551616871f70ee447a92a1ad677cc2"}
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.094918 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlmpf" event={"ID":"5346f11a-11bb-4650-8de5-7988e8cb2bba","Type":"ContainerDied","Data":"173e3d498ece2647f4611dbd89757ab9daacd8d1b93a02af79d93bf72e675a9a"}
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.097517 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"37e7976c-a0ea-499a-9750-ddc0ff02006d","Type":"ContainerStarted","Data":"d734c53754016433b6483f2c753c5a09bfdf2f31ced6ccd8dafbd164f8a5aa12"}
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.097568 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"37e7976c-a0ea-499a-9750-ddc0ff02006d","Type":"ContainerStarted","Data":"a820cacf4c60682e38c51319306609c96ad87f93a4f4117c4f6dfbad2562ab0e"}
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.097849 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jssxn" podUID="a4b72480-d05d-4b1a-9b30-0d3e80ea6249" containerName="registry-server" containerID="cri-o://cb0bf453cfba4b585f0e472106c21d7f07b33bcf62d3b1705980f26fa79a0dbb" gracePeriod=2
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.109608 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=3.10958912 podStartE2EDuration="3.10958912s" podCreationTimestamp="2026-01-21 00:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:12:12.105891076 +0000 UTC m=+187.430138114" watchObservedRunningTime="2026-01-21 00:12:12.10958912 +0000 UTC m=+187.433836138"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.122969 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s6c4w"]
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.126375 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s6c4w"]
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.136644 5118 scope.go:117] "RemoveContainer" containerID="736538107d9a3bd0dcd131d07877a6353bec420c76db6a5f3774e2e2a504fbbf"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.140672 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=8.140651608 podStartE2EDuration="8.140651608s" podCreationTimestamp="2026-01-21 00:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:12:12.136887703 +0000 UTC m=+187.461134711" watchObservedRunningTime="2026-01-21 00:12:12.140651608 +0000 UTC m=+187.464898626"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.157947 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vp4hf"]
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.160533 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vp4hf"]
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.162529 5118 scope.go:117] "RemoveContainer" containerID="2e0699377325df9171380a1f21d033638c1575666d74a8e0db5997379f8ecd64"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.170618 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mlmpf"]
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.173871 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mlmpf"]
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.235357 5118 scope.go:117] "RemoveContainer" containerID="9057b8184e7621c25bf1204803768b6924991fbb94d64cf36b670df60242f816"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.278315 5118 scope.go:117] "RemoveContainer" containerID="6dbb779b1531e72003ebd601e9389f21f87b86a67ab62411fe8585d7689e8d31"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.313822 5118 scope.go:117] "RemoveContainer" containerID="9e7b3bbb75db1e0c17cb689b1fca3ea06271f5e524a66b3c84d7b178658c950a"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.337765 5118 scope.go:117] "RemoveContainer" containerID="f6aa4fc4d80c9e9a954c43c2f176c793f1551616871f70ee447a92a1ad677cc2"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.353217 5118 scope.go:117] "RemoveContainer" containerID="1f4f5398d1d647dea6419a97362d216c1c8c24ec582354d40541ac1c8dfaf9ad"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.394001 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jssxn"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.402561 5118 scope.go:117] "RemoveContainer" containerID="764bdc246fb04d2d4abaabb1979042f3858b5ae71628243a62d9f523ba022448"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.442790 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssr2x\" (UniqueName: \"kubernetes.io/projected/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-kube-api-access-ssr2x\") pod \"a4b72480-d05d-4b1a-9b30-0d3e80ea6249\" (UID: \"a4b72480-d05d-4b1a-9b30-0d3e80ea6249\") "
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.442865 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-catalog-content\") pod \"a4b72480-d05d-4b1a-9b30-0d3e80ea6249\" (UID: \"a4b72480-d05d-4b1a-9b30-0d3e80ea6249\") "
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.442941 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-utilities\") pod \"a4b72480-d05d-4b1a-9b30-0d3e80ea6249\" (UID: \"a4b72480-d05d-4b1a-9b30-0d3e80ea6249\") "
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.443922 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-utilities" (OuterVolumeSpecName: "utilities") pod "a4b72480-d05d-4b1a-9b30-0d3e80ea6249" (UID: "a4b72480-d05d-4b1a-9b30-0d3e80ea6249"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.447148 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-kube-api-access-ssr2x" (OuterVolumeSpecName: "kube-api-access-ssr2x") pod "a4b72480-d05d-4b1a-9b30-0d3e80ea6249" (UID: "a4b72480-d05d-4b1a-9b30-0d3e80ea6249"). InnerVolumeSpecName "kube-api-access-ssr2x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.481866 5118 scope.go:117] "RemoveContainer" containerID="f6aa4fc4d80c9e9a954c43c2f176c793f1551616871f70ee447a92a1ad677cc2"
Jan 21 00:12:12 crc kubenswrapper[5118]: E0121 00:12:12.482437 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6aa4fc4d80c9e9a954c43c2f176c793f1551616871f70ee447a92a1ad677cc2\": container with ID starting with f6aa4fc4d80c9e9a954c43c2f176c793f1551616871f70ee447a92a1ad677cc2 not found: ID does not exist" containerID="f6aa4fc4d80c9e9a954c43c2f176c793f1551616871f70ee447a92a1ad677cc2"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.482481 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6aa4fc4d80c9e9a954c43c2f176c793f1551616871f70ee447a92a1ad677cc2"} err="failed to get container status \"f6aa4fc4d80c9e9a954c43c2f176c793f1551616871f70ee447a92a1ad677cc2\": rpc error: code = NotFound desc = could not find container \"f6aa4fc4d80c9e9a954c43c2f176c793f1551616871f70ee447a92a1ad677cc2\": container with ID starting with f6aa4fc4d80c9e9a954c43c2f176c793f1551616871f70ee447a92a1ad677cc2 not found: ID does not exist"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.482564 5118 scope.go:117] "RemoveContainer" containerID="1f4f5398d1d647dea6419a97362d216c1c8c24ec582354d40541ac1c8dfaf9ad"
Jan 21 00:12:12 crc kubenswrapper[5118]: E0121 00:12:12.482803 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f4f5398d1d647dea6419a97362d216c1c8c24ec582354d40541ac1c8dfaf9ad\": container with ID starting with 1f4f5398d1d647dea6419a97362d216c1c8c24ec582354d40541ac1c8dfaf9ad not found: ID does not exist" containerID="1f4f5398d1d647dea6419a97362d216c1c8c24ec582354d40541ac1c8dfaf9ad"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.482837 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f4f5398d1d647dea6419a97362d216c1c8c24ec582354d40541ac1c8dfaf9ad"} err="failed to get container status \"1f4f5398d1d647dea6419a97362d216c1c8c24ec582354d40541ac1c8dfaf9ad\": rpc error: code = NotFound desc = could not find container \"1f4f5398d1d647dea6419a97362d216c1c8c24ec582354d40541ac1c8dfaf9ad\": container with ID starting with 1f4f5398d1d647dea6419a97362d216c1c8c24ec582354d40541ac1c8dfaf9ad not found: ID does not exist"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.482854 5118 scope.go:117] "RemoveContainer" containerID="764bdc246fb04d2d4abaabb1979042f3858b5ae71628243a62d9f523ba022448"
Jan 21 00:12:12 crc kubenswrapper[5118]: E0121 00:12:12.483248 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"764bdc246fb04d2d4abaabb1979042f3858b5ae71628243a62d9f523ba022448\": container with ID starting with 764bdc246fb04d2d4abaabb1979042f3858b5ae71628243a62d9f523ba022448 not found: ID does not exist" containerID="764bdc246fb04d2d4abaabb1979042f3858b5ae71628243a62d9f523ba022448"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.483279 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"764bdc246fb04d2d4abaabb1979042f3858b5ae71628243a62d9f523ba022448"} err="failed to get container status \"764bdc246fb04d2d4abaabb1979042f3858b5ae71628243a62d9f523ba022448\": rpc error: code = NotFound desc = could not find container \"764bdc246fb04d2d4abaabb1979042f3858b5ae71628243a62d9f523ba022448\": container with ID starting with 764bdc246fb04d2d4abaabb1979042f3858b5ae71628243a62d9f523ba022448 not found: ID does not exist"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.544846 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.544903 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ssr2x\" (UniqueName: \"kubernetes.io/projected/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-kube-api-access-ssr2x\") on node \"crc\" DevicePath \"\""
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.561504 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4b72480-d05d-4b1a-9b30-0d3e80ea6249" (UID: "a4b72480-d05d-4b1a-9b30-0d3e80ea6249"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.646727 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b72480-d05d-4b1a-9b30-0d3e80ea6249-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.984784 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ee59881-8e70-4769-b92d-5df34a2b9130" path="/var/lib/kubelet/pods/3ee59881-8e70-4769-b92d-5df34a2b9130/volumes"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.985421 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5346f11a-11bb-4650-8de5-7988e8cb2bba" path="/var/lib/kubelet/pods/5346f11a-11bb-4650-8de5-7988e8cb2bba/volumes"
Jan 21 00:12:12 crc kubenswrapper[5118]: I0121 00:12:12.985967 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f20612e2-22cd-486f-b881-af82d40bd144" path="/var/lib/kubelet/pods/f20612e2-22cd-486f-b881-af82d40bd144/volumes"
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.106336 5118 generic.go:358] "Generic (PLEG): container finished" podID="37e7976c-a0ea-499a-9750-ddc0ff02006d" containerID="d734c53754016433b6483f2c753c5a09bfdf2f31ced6ccd8dafbd164f8a5aa12" exitCode=0
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.106449 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"37e7976c-a0ea-499a-9750-ddc0ff02006d","Type":"ContainerDied","Data":"d734c53754016433b6483f2c753c5a09bfdf2f31ced6ccd8dafbd164f8a5aa12"}
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.115097 5118 generic.go:358] "Generic (PLEG): container finished" podID="a4b72480-d05d-4b1a-9b30-0d3e80ea6249" containerID="cb0bf453cfba4b585f0e472106c21d7f07b33bcf62d3b1705980f26fa79a0dbb" exitCode=0
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.115227 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jssxn"
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.115238 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jssxn" event={"ID":"a4b72480-d05d-4b1a-9b30-0d3e80ea6249","Type":"ContainerDied","Data":"cb0bf453cfba4b585f0e472106c21d7f07b33bcf62d3b1705980f26fa79a0dbb"}
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.115261 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jssxn" event={"ID":"a4b72480-d05d-4b1a-9b30-0d3e80ea6249","Type":"ContainerDied","Data":"d179a10ab8dc512dbf748cb54fbac343e86ffa735addc35494f6528b1d83a95c"}
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.115277 5118 scope.go:117] "RemoveContainer" containerID="cb0bf453cfba4b585f0e472106c21d7f07b33bcf62d3b1705980f26fa79a0dbb"
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.137459 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jssxn"]
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.140557 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jssxn"]
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.142150 5118 scope.go:117] "RemoveContainer" containerID="66f553be86b261ed50480cb7b063522bab219fbb6e2153941caa9305a30aa6b1"
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.155551 5118 scope.go:117] "RemoveContainer" containerID="94fbe24572a04ebeedb2588530d72197a47094babf66f631703d42161954206b"
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.167872 5118 scope.go:117] "RemoveContainer" containerID="cb0bf453cfba4b585f0e472106c21d7f07b33bcf62d3b1705980f26fa79a0dbb"
Jan 21 00:12:13 crc kubenswrapper[5118]: E0121 00:12:13.168251 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb0bf453cfba4b585f0e472106c21d7f07b33bcf62d3b1705980f26fa79a0dbb\": container with ID starting with cb0bf453cfba4b585f0e472106c21d7f07b33bcf62d3b1705980f26fa79a0dbb not found: ID does not exist" containerID="cb0bf453cfba4b585f0e472106c21d7f07b33bcf62d3b1705980f26fa79a0dbb"
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.168281 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb0bf453cfba4b585f0e472106c21d7f07b33bcf62d3b1705980f26fa79a0dbb"} err="failed to get container status \"cb0bf453cfba4b585f0e472106c21d7f07b33bcf62d3b1705980f26fa79a0dbb\": rpc error: code = NotFound desc = could not find container \"cb0bf453cfba4b585f0e472106c21d7f07b33bcf62d3b1705980f26fa79a0dbb\": container with ID starting with cb0bf453cfba4b585f0e472106c21d7f07b33bcf62d3b1705980f26fa79a0dbb not found: ID does not exist"
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.168304 5118 scope.go:117] "RemoveContainer" containerID="66f553be86b261ed50480cb7b063522bab219fbb6e2153941caa9305a30aa6b1"
Jan 21 00:12:13 crc kubenswrapper[5118]: E0121 00:12:13.168527 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66f553be86b261ed50480cb7b063522bab219fbb6e2153941caa9305a30aa6b1\": container with ID starting with 66f553be86b261ed50480cb7b063522bab219fbb6e2153941caa9305a30aa6b1 not found: ID does not exist" containerID="66f553be86b261ed50480cb7b063522bab219fbb6e2153941caa9305a30aa6b1"
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.168551 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66f553be86b261ed50480cb7b063522bab219fbb6e2153941caa9305a30aa6b1"} err="failed to get container status \"66f553be86b261ed50480cb7b063522bab219fbb6e2153941caa9305a30aa6b1\": rpc error: code = NotFound desc = could not find container \"66f553be86b261ed50480cb7b063522bab219fbb6e2153941caa9305a30aa6b1\": container with ID starting with 66f553be86b261ed50480cb7b063522bab219fbb6e2153941caa9305a30aa6b1 not found: ID does not exist"
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.168566 5118 scope.go:117] "RemoveContainer" containerID="94fbe24572a04ebeedb2588530d72197a47094babf66f631703d42161954206b"
Jan 21 00:12:13 crc kubenswrapper[5118]: E0121 00:12:13.168744 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94fbe24572a04ebeedb2588530d72197a47094babf66f631703d42161954206b\": container with ID starting with 94fbe24572a04ebeedb2588530d72197a47094babf66f631703d42161954206b not found: ID does not exist" containerID="94fbe24572a04ebeedb2588530d72197a47094babf66f631703d42161954206b"
Jan 21 00:12:13 crc kubenswrapper[5118]: I0121 00:12:13.168772 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94fbe24572a04ebeedb2588530d72197a47094babf66f631703d42161954206b"} err="failed to get container status \"94fbe24572a04ebeedb2588530d72197a47094babf66f631703d42161954206b\": rpc error: code = NotFound desc = could not find container \"94fbe24572a04ebeedb2588530d72197a47094babf66f631703d42161954206b\": container with ID starting with 94fbe24572a04ebeedb2588530d72197a47094babf66f631703d42161954206b not found: ID does not exist"
Jan 21 00:12:14 crc kubenswrapper[5118]: I0121 00:12:14.088812 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-94zfv"
Jan 21 00:12:14 crc kubenswrapper[5118]: I0121 00:12:14.135874 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-94zfv"
Jan 21 00:12:14 crc kubenswrapper[5118]: I0121 00:12:14.345469 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 21 00:12:14 crc kubenswrapper[5118]: I0121 00:12:14.365333 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37e7976c-a0ea-499a-9750-ddc0ff02006d-kubelet-dir\") pod \"37e7976c-a0ea-499a-9750-ddc0ff02006d\" (UID: \"37e7976c-a0ea-499a-9750-ddc0ff02006d\") " Jan 21 00:12:14 crc kubenswrapper[5118]: I0121 00:12:14.365486 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37e7976c-a0ea-499a-9750-ddc0ff02006d-kube-api-access\") pod \"37e7976c-a0ea-499a-9750-ddc0ff02006d\" (UID: \"37e7976c-a0ea-499a-9750-ddc0ff02006d\") " Jan 21 00:12:14 crc kubenswrapper[5118]: I0121 00:12:14.366100 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37e7976c-a0ea-499a-9750-ddc0ff02006d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "37e7976c-a0ea-499a-9750-ddc0ff02006d" (UID: "37e7976c-a0ea-499a-9750-ddc0ff02006d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:12:14 crc kubenswrapper[5118]: I0121 00:12:14.370039 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e7976c-a0ea-499a-9750-ddc0ff02006d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "37e7976c-a0ea-499a-9750-ddc0ff02006d" (UID: "37e7976c-a0ea-499a-9750-ddc0ff02006d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:12:14 crc kubenswrapper[5118]: I0121 00:12:14.468353 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37e7976c-a0ea-499a-9750-ddc0ff02006d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:14 crc kubenswrapper[5118]: I0121 00:12:14.468391 5118 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37e7976c-a0ea-499a-9750-ddc0ff02006d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:14 crc kubenswrapper[5118]: I0121 00:12:14.986656 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4b72480-d05d-4b1a-9b30-0d3e80ea6249" path="/var/lib/kubelet/pods/a4b72480-d05d-4b1a-9b30-0d3e80ea6249/volumes" Jan 21 00:12:15 crc kubenswrapper[5118]: I0121 00:12:15.143721 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 21 00:12:15 crc kubenswrapper[5118]: I0121 00:12:15.143761 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"37e7976c-a0ea-499a-9750-ddc0ff02006d","Type":"ContainerDied","Data":"a820cacf4c60682e38c51319306609c96ad87f93a4f4117c4f6dfbad2562ab0e"} Jan 21 00:12:15 crc kubenswrapper[5118]: I0121 00:12:15.143815 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a820cacf4c60682e38c51319306609c96ad87f93a4f4117c4f6dfbad2562ab0e" Jan 21 00:12:26 crc kubenswrapper[5118]: I0121 00:12:26.905708 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 00:12:27 crc kubenswrapper[5118]: I0121 00:12:27.506101 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" 
podUID="19280e75-8f04-47d1-bc42-124082dfd247" containerName="oauth-openshift" containerID="cri-o://7c18869c859528ea916fd3e1d6ac70a3b59c0491590f8c2bad1b1e2b78cc4083" gracePeriod=15 Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.224073 5118 generic.go:358] "Generic (PLEG): container finished" podID="19280e75-8f04-47d1-bc42-124082dfd247" containerID="7c18869c859528ea916fd3e1d6ac70a3b59c0491590f8c2bad1b1e2b78cc4083" exitCode=0 Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.224231 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" event={"ID":"19280e75-8f04-47d1-bc42-124082dfd247","Type":"ContainerDied","Data":"7c18869c859528ea916fd3e1d6ac70a3b59c0491590f8c2bad1b1e2b78cc4083"} Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.394647 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424182 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7bc64564f6-zx9lm"] Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424718 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5346f11a-11bb-4650-8de5-7988e8cb2bba" containerName="extract-content" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424735 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="5346f11a-11bb-4650-8de5-7988e8cb2bba" containerName="extract-content" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424742 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5346f11a-11bb-4650-8de5-7988e8cb2bba" containerName="registry-server" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424748 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="5346f11a-11bb-4650-8de5-7988e8cb2bba" containerName="registry-server" Jan 21 00:12:28 crc 
kubenswrapper[5118]: I0121 00:12:28.424757 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4b72480-d05d-4b1a-9b30-0d3e80ea6249" containerName="extract-content" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424764 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b72480-d05d-4b1a-9b30-0d3e80ea6249" containerName="extract-content" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424775 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="37e7976c-a0ea-499a-9750-ddc0ff02006d" containerName="pruner" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424781 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e7976c-a0ea-499a-9750-ddc0ff02006d" containerName="pruner" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424793 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4b72480-d05d-4b1a-9b30-0d3e80ea6249" containerName="extract-utilities" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424799 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b72480-d05d-4b1a-9b30-0d3e80ea6249" containerName="extract-utilities" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424809 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f20612e2-22cd-486f-b881-af82d40bd144" containerName="extract-utilities" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424815 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="f20612e2-22cd-486f-b881-af82d40bd144" containerName="extract-utilities" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424825 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f20612e2-22cd-486f-b881-af82d40bd144" containerName="registry-server" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424830 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="f20612e2-22cd-486f-b881-af82d40bd144" 
containerName="registry-server" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424842 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ee59881-8e70-4769-b92d-5df34a2b9130" containerName="extract-utilities" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424848 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee59881-8e70-4769-b92d-5df34a2b9130" containerName="extract-utilities" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424855 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4b72480-d05d-4b1a-9b30-0d3e80ea6249" containerName="registry-server" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424860 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b72480-d05d-4b1a-9b30-0d3e80ea6249" containerName="registry-server" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424868 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5346f11a-11bb-4650-8de5-7988e8cb2bba" containerName="extract-utilities" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424873 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="5346f11a-11bb-4650-8de5-7988e8cb2bba" containerName="extract-utilities" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424879 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ee59881-8e70-4769-b92d-5df34a2b9130" containerName="registry-server" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424884 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee59881-8e70-4769-b92d-5df34a2b9130" containerName="registry-server" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424890 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19280e75-8f04-47d1-bc42-124082dfd247" containerName="oauth-openshift" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424895 5118 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="19280e75-8f04-47d1-bc42-124082dfd247" containerName="oauth-openshift" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424904 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ee59881-8e70-4769-b92d-5df34a2b9130" containerName="extract-content" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424909 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee59881-8e70-4769-b92d-5df34a2b9130" containerName="extract-content" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424919 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f20612e2-22cd-486f-b881-af82d40bd144" containerName="extract-content" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424924 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="f20612e2-22cd-486f-b881-af82d40bd144" containerName="extract-content" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.424999 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="37e7976c-a0ea-499a-9750-ddc0ff02006d" containerName="pruner" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.425009 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="f20612e2-22cd-486f-b881-af82d40bd144" containerName="registry-server" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.425019 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3ee59881-8e70-4769-b92d-5df34a2b9130" containerName="registry-server" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.425026 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4b72480-d05d-4b1a-9b30-0d3e80ea6249" containerName="registry-server" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.425034 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="19280e75-8f04-47d1-bc42-124082dfd247" containerName="oauth-openshift" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.425039 5118 
memory_manager.go:356] "RemoveStaleState removing state" podUID="5346f11a-11bb-4650-8de5-7988e8cb2bba" containerName="registry-server" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.428071 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.442251 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7bc64564f6-zx9lm"] Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.460553 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-idp-0-file-data\") pod \"19280e75-8f04-47d1-bc42-124082dfd247\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.460644 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19280e75-8f04-47d1-bc42-124082dfd247-audit-dir\") pod \"19280e75-8f04-47d1-bc42-124082dfd247\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.460693 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz6wz\" (UniqueName: \"kubernetes.io/projected/19280e75-8f04-47d1-bc42-124082dfd247-kube-api-access-xz6wz\") pod \"19280e75-8f04-47d1-bc42-124082dfd247\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.460720 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-login\") pod \"19280e75-8f04-47d1-bc42-124082dfd247\" (UID: 
\"19280e75-8f04-47d1-bc42-124082dfd247\") " Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.460773 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-trusted-ca-bundle\") pod \"19280e75-8f04-47d1-bc42-124082dfd247\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.460837 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-cliconfig\") pod \"19280e75-8f04-47d1-bc42-124082dfd247\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.460863 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-audit-policies\") pod \"19280e75-8f04-47d1-bc42-124082dfd247\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.460888 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-ocp-branding-template\") pod \"19280e75-8f04-47d1-bc42-124082dfd247\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.460913 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-session\") pod \"19280e75-8f04-47d1-bc42-124082dfd247\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " Jan 21 00:12:28 crc kubenswrapper[5118]: 
I0121 00:12:28.460945 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-error\") pod \"19280e75-8f04-47d1-bc42-124082dfd247\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.460983 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-serving-cert\") pod \"19280e75-8f04-47d1-bc42-124082dfd247\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.461024 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-service-ca\") pod \"19280e75-8f04-47d1-bc42-124082dfd247\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.461055 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-provider-selection\") pod \"19280e75-8f04-47d1-bc42-124082dfd247\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.461081 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-router-certs\") pod \"19280e75-8f04-47d1-bc42-124082dfd247\" (UID: \"19280e75-8f04-47d1-bc42-124082dfd247\") " Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.462555 5118 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "19280e75-8f04-47d1-bc42-124082dfd247" (UID: "19280e75-8f04-47d1-bc42-124082dfd247"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.463256 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "19280e75-8f04-47d1-bc42-124082dfd247" (UID: "19280e75-8f04-47d1-bc42-124082dfd247"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.463786 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "19280e75-8f04-47d1-bc42-124082dfd247" (UID: "19280e75-8f04-47d1-bc42-124082dfd247"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.465311 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "19280e75-8f04-47d1-bc42-124082dfd247" (UID: "19280e75-8f04-47d1-bc42-124082dfd247"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.465937 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19280e75-8f04-47d1-bc42-124082dfd247-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "19280e75-8f04-47d1-bc42-124082dfd247" (UID: "19280e75-8f04-47d1-bc42-124082dfd247"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.475726 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "19280e75-8f04-47d1-bc42-124082dfd247" (UID: "19280e75-8f04-47d1-bc42-124082dfd247"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.476273 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19280e75-8f04-47d1-bc42-124082dfd247-kube-api-access-xz6wz" (OuterVolumeSpecName: "kube-api-access-xz6wz") pod "19280e75-8f04-47d1-bc42-124082dfd247" (UID: "19280e75-8f04-47d1-bc42-124082dfd247"). InnerVolumeSpecName "kube-api-access-xz6wz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.477411 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "19280e75-8f04-47d1-bc42-124082dfd247" (UID: "19280e75-8f04-47d1-bc42-124082dfd247"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.479921 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "19280e75-8f04-47d1-bc42-124082dfd247" (UID: "19280e75-8f04-47d1-bc42-124082dfd247"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.480403 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "19280e75-8f04-47d1-bc42-124082dfd247" (UID: "19280e75-8f04-47d1-bc42-124082dfd247"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.487436 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "19280e75-8f04-47d1-bc42-124082dfd247" (UID: "19280e75-8f04-47d1-bc42-124082dfd247"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.500381 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "19280e75-8f04-47d1-bc42-124082dfd247" (UID: "19280e75-8f04-47d1-bc42-124082dfd247"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.500448 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "19280e75-8f04-47d1-bc42-124082dfd247" (UID: "19280e75-8f04-47d1-bc42-124082dfd247"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.506477 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "19280e75-8f04-47d1-bc42-124082dfd247" (UID: "19280e75-8f04-47d1-bc42-124082dfd247"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562295 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-user-template-error\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562354 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc 
kubenswrapper[5118]: I0121 00:12:28.562380 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-session\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562427 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/afeff785-84e2-4b7c-8619-3e05893565f3-audit-policies\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562473 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562497 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562529 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/afeff785-84e2-4b7c-8619-3e05893565f3-audit-dir\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562563 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562591 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562616 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-user-template-login\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562637 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67d5s\" (UniqueName: \"kubernetes.io/projected/afeff785-84e2-4b7c-8619-3e05893565f3-kube-api-access-67d5s\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: 
\"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562665 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562728 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562759 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562811 5118 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562826 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562838 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562849 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562860 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562872 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562885 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562897 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 
00:12:28.562909 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562920 5118 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19280e75-8f04-47d1-bc42-124082dfd247-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562930 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xz6wz\" (UniqueName: \"kubernetes.io/projected/19280e75-8f04-47d1-bc42-124082dfd247-kube-api-access-xz6wz\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562941 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562953 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.562964 5118 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/19280e75-8f04-47d1-bc42-124082dfd247-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.663718 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/afeff785-84e2-4b7c-8619-3e05893565f3-audit-policies\") pod \"oauth-openshift-7bc64564f6-zx9lm\" 
(UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.663774 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.663799 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.664507 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afeff785-84e2-4b7c-8619-3e05893565f3-audit-dir\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.664581 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.664620 5118 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.664636 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afeff785-84e2-4b7c-8619-3e05893565f3-audit-dir\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.664654 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-user-template-login\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.664709 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-67d5s\" (UniqueName: \"kubernetes.io/projected/afeff785-84e2-4b7c-8619-3e05893565f3-kube-api-access-67d5s\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.664745 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " 
pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.664781 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.664840 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.664887 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/afeff785-84e2-4b7c-8619-3e05893565f3-audit-policies\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.664939 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.664990 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-user-template-error\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.665033 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.665065 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-session\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.665382 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.666393 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " 
pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.668593 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.668599 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.669127 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-user-template-error\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.669141 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.669249 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-user-template-login\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.669623 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.670737 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.670854 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/afeff785-84e2-4b7c-8619-3e05893565f3-v4-0-config-system-session\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.685572 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-67d5s\" (UniqueName: \"kubernetes.io/projected/afeff785-84e2-4b7c-8619-3e05893565f3-kube-api-access-67d5s\") pod \"oauth-openshift-7bc64564f6-zx9lm\" (UID: \"afeff785-84e2-4b7c-8619-3e05893565f3\") " 
pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:28 crc kubenswrapper[5118]: I0121 00:12:28.744404 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:29 crc kubenswrapper[5118]: I0121 00:12:29.151736 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7bc64564f6-zx9lm"] Jan 21 00:12:29 crc kubenswrapper[5118]: I0121 00:12:29.232243 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" event={"ID":"afeff785-84e2-4b7c-8619-3e05893565f3","Type":"ContainerStarted","Data":"39b5a010891b03f2d46568dccb19f576faf2ac28c2fbcbc11bef1b41fc2745d9"} Jan 21 00:12:29 crc kubenswrapper[5118]: I0121 00:12:29.235446 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" event={"ID":"19280e75-8f04-47d1-bc42-124082dfd247","Type":"ContainerDied","Data":"15606bf42ae247b88c664efe79aa9a26cf4dd7ebf4a45d199bffae59f4c423e2"} Jan 21 00:12:29 crc kubenswrapper[5118]: I0121 00:12:29.235481 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-5gv2n" Jan 21 00:12:29 crc kubenswrapper[5118]: I0121 00:12:29.235520 5118 scope.go:117] "RemoveContainer" containerID="7c18869c859528ea916fd3e1d6ac70a3b59c0491590f8c2bad1b1e2b78cc4083" Jan 21 00:12:29 crc kubenswrapper[5118]: I0121 00:12:29.258664 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5gv2n"] Jan 21 00:12:29 crc kubenswrapper[5118]: I0121 00:12:29.265722 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5gv2n"] Jan 21 00:12:30 crc kubenswrapper[5118]: I0121 00:12:30.242215 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" event={"ID":"afeff785-84e2-4b7c-8619-3e05893565f3","Type":"ContainerStarted","Data":"1f0a00ba209d603697ed2a12cce7dcc91a9b925d2a26c75f8539fb59b291de59"} Jan 21 00:12:30 crc kubenswrapper[5118]: I0121 00:12:30.242689 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:30 crc kubenswrapper[5118]: I0121 00:12:30.250297 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" Jan 21 00:12:30 crc kubenswrapper[5118]: I0121 00:12:30.266518 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7bc64564f6-zx9lm" podStartSLOduration=28.266502392 podStartE2EDuration="28.266502392s" podCreationTimestamp="2026-01-21 00:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:12:30.266279176 +0000 UTC m=+205.590526224" watchObservedRunningTime="2026-01-21 00:12:30.266502392 +0000 UTC m=+205.590749410" Jan 21 00:12:30 crc 
kubenswrapper[5118]: I0121 00:12:30.989773 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19280e75-8f04-47d1-bc42-124082dfd247" path="/var/lib/kubelet/pods/19280e75-8f04-47d1-bc42-124082dfd247/volumes" Jan 21 00:12:41 crc kubenswrapper[5118]: I0121 00:12:41.242543 5118 ???:1] "http: TLS handshake error from 192.168.126.11:51468: no serving certificate available for the kubelet" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.375747 5118 generic.go:358] "Generic (PLEG): container finished" podID="ae767afd-59d5-4c04-9ecc-f9ae7b317698" containerID="feaa79e58645e3c369904a4c700a10edd742db3b375824cd728bb262eb7a3678" exitCode=0 Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.376003 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29482560-n7qwb" event={"ID":"ae767afd-59d5-4c04-9ecc-f9ae7b317698","Type":"ContainerDied","Data":"feaa79e58645e3c369904a4c700a10edd742db3b375824cd728bb262eb7a3678"} Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.608841 5118 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.617837 5118 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.618044 5118 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.618059 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.618671 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.618755 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.618831 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.618899 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.619014 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.619469 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.619641 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.619760 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.618712 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369" gracePeriod=15 Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.619044 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://24f0d293f59538f561797000d7d6d3bf5cf65c588d7307bbc362958b5c993376" gracePeriod=15 Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.618640 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495" gracePeriod=15 Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.618706 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370" gracePeriod=15 Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.618737 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1" gracePeriod=15 Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.620286 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 
00:12:49.620587 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.620809 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.620906 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.621850 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.621872 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.621882 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.621900 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.621929 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.621937 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.621951 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.621959 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.622269 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.622292 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.622306 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.622425 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.622439 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.622452 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.622469 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.622489 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" 
Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.622801 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.624991 5118 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.648785 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.759297 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.759338 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.759362 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.759378 5118 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.759400 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.759442 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.759461 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.759482 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 
00:12:49.759508 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.759522 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: E0121 00:12:49.836434 5118 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:49 crc kubenswrapper[5118]: E0121 00:12:49.836854 5118 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:49 crc kubenswrapper[5118]: E0121 00:12:49.837054 5118 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:49 crc kubenswrapper[5118]: E0121 00:12:49.837274 5118 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:49 crc kubenswrapper[5118]: E0121 00:12:49.837459 5118 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.837485 5118 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 00:12:49 crc kubenswrapper[5118]: E0121 00:12:49.837749 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="200ms" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.860642 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.860713 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.860749 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.860775 5118 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.860798 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.860825 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.860866 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.860889 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.860980 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.861014 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.861109 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.861359 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.861446 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.861491 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.861731 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.861763 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.861795 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.861800 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.861822 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:49 crc kubenswrapper[5118]: I0121 00:12:49.861844 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:50 crc kubenswrapper[5118]: E0121 00:12:50.038911 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="400ms" Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.383947 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.385781 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.386452 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="24f0d293f59538f561797000d7d6d3bf5cf65c588d7307bbc362958b5c993376" exitCode=0 Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.386486 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370" exitCode=0 Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.386495 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369" 
exitCode=0 Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.386503 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1" exitCode=2 Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.386571 5118 scope.go:117] "RemoveContainer" containerID="4192feefd5cfa0d044d233324b963c0d52fc469217c4faad785dda349a30a38b" Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.387996 5118 generic.go:358] "Generic (PLEG): container finished" podID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" containerID="df4fb1143d2b8e115a19f6481627d02f274d9c6b53eb5c0dc70566f3fe2473dd" exitCode=0 Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.388061 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"f9b64d95-63f3-4084-bcf2-406ff2c75cee","Type":"ContainerDied","Data":"df4fb1143d2b8e115a19f6481627d02f274d9c6b53eb5c0dc70566f3fe2473dd"} Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.388939 5118 status_manager.go:895] "Failed to get status for pod" podUID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:50 crc kubenswrapper[5118]: E0121 00:12:50.439660 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="800ms" Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.549247 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29482560-n7qwb" Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.550071 5118 status_manager.go:895] "Failed to get status for pod" podUID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.550527 5118 status_manager.go:895] "Failed to get status for pod" podUID="ae767afd-59d5-4c04-9ecc-f9ae7b317698" pod="openshift-image-registry/image-pruner-29482560-n7qwb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29482560-n7qwb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.670152 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ae767afd-59d5-4c04-9ecc-f9ae7b317698-serviceca\") pod \"ae767afd-59d5-4c04-9ecc-f9ae7b317698\" (UID: \"ae767afd-59d5-4c04-9ecc-f9ae7b317698\") " Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.670355 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5k9z\" (UniqueName: \"kubernetes.io/projected/ae767afd-59d5-4c04-9ecc-f9ae7b317698-kube-api-access-j5k9z\") pod \"ae767afd-59d5-4c04-9ecc-f9ae7b317698\" (UID: \"ae767afd-59d5-4c04-9ecc-f9ae7b317698\") " Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.671097 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae767afd-59d5-4c04-9ecc-f9ae7b317698-serviceca" (OuterVolumeSpecName: "serviceca") pod "ae767afd-59d5-4c04-9ecc-f9ae7b317698" (UID: "ae767afd-59d5-4c04-9ecc-f9ae7b317698"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.681291 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae767afd-59d5-4c04-9ecc-f9ae7b317698-kube-api-access-j5k9z" (OuterVolumeSpecName: "kube-api-access-j5k9z") pod "ae767afd-59d5-4c04-9ecc-f9ae7b317698" (UID: "ae767afd-59d5-4c04-9ecc-f9ae7b317698"). InnerVolumeSpecName "kube-api-access-j5k9z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.772035 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j5k9z\" (UniqueName: \"kubernetes.io/projected/ae767afd-59d5-4c04-9ecc-f9ae7b317698-kube-api-access-j5k9z\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:50 crc kubenswrapper[5118]: I0121 00:12:50.772082 5118 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ae767afd-59d5-4c04-9ecc-f9ae7b317698-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:51 crc kubenswrapper[5118]: E0121 00:12:51.240771 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="1.6s" Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.398064 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29482560-n7qwb" Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.398195 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29482560-n7qwb" event={"ID":"ae767afd-59d5-4c04-9ecc-f9ae7b317698","Type":"ContainerDied","Data":"c5fb597f64097be0d3b8fbd22e355dc53d92d2fedac71dc8e7f20e463d206c0d"} Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.398342 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5fb597f64097be0d3b8fbd22e355dc53d92d2fedac71dc8e7f20e463d206c0d" Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.403694 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.404037 5118 status_manager.go:895] "Failed to get status for pod" podUID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.404431 5118 status_manager.go:895] "Failed to get status for pod" podUID="ae767afd-59d5-4c04-9ecc-f9ae7b317698" pod="openshift-image-registry/image-pruner-29482560-n7qwb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29482560-n7qwb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.632531 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.633301 5118 status_manager.go:895] "Failed to get status for pod" podUID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.633471 5118 status_manager.go:895] "Failed to get status for pod" podUID="ae767afd-59d5-4c04-9ecc-f9ae7b317698" pod="openshift-image-registry/image-pruner-29482560-n7qwb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29482560-n7qwb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.689950 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9b64d95-63f3-4084-bcf2-406ff2c75cee-kube-api-access\") pod \"f9b64d95-63f3-4084-bcf2-406ff2c75cee\" (UID: \"f9b64d95-63f3-4084-bcf2-406ff2c75cee\") " Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.690018 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f9b64d95-63f3-4084-bcf2-406ff2c75cee-var-lock\") pod \"f9b64d95-63f3-4084-bcf2-406ff2c75cee\" (UID: \"f9b64d95-63f3-4084-bcf2-406ff2c75cee\") " Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.690240 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9b64d95-63f3-4084-bcf2-406ff2c75cee-kubelet-dir\") pod \"f9b64d95-63f3-4084-bcf2-406ff2c75cee\" (UID: \"f9b64d95-63f3-4084-bcf2-406ff2c75cee\") " Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.690673 5118 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9b64d95-63f3-4084-bcf2-406ff2c75cee-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f9b64d95-63f3-4084-bcf2-406ff2c75cee" (UID: "f9b64d95-63f3-4084-bcf2-406ff2c75cee"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.690734 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9b64d95-63f3-4084-bcf2-406ff2c75cee-var-lock" (OuterVolumeSpecName: "var-lock") pod "f9b64d95-63f3-4084-bcf2-406ff2c75cee" (UID: "f9b64d95-63f3-4084-bcf2-406ff2c75cee"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.698229 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b64d95-63f3-4084-bcf2-406ff2c75cee-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f9b64d95-63f3-4084-bcf2-406ff2c75cee" (UID: "f9b64d95-63f3-4084-bcf2-406ff2c75cee"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.791980 5118 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9b64d95-63f3-4084-bcf2-406ff2c75cee-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.792029 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9b64d95-63f3-4084-bcf2-406ff2c75cee-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:51 crc kubenswrapper[5118]: I0121 00:12:51.792119 5118 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f9b64d95-63f3-4084-bcf2-406ff2c75cee-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.023535 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.025069 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.025710 5118 status_manager.go:895] "Failed to get status for pod" podUID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.026100 5118 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.026348 5118 status_manager.go:895] "Failed to get status for pod" podUID="ae767afd-59d5-4c04-9ecc-f9ae7b317698" pod="openshift-image-registry/image-pruner-29482560-n7qwb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29482560-n7qwb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.095856 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.095913 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.095948 5118 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.096064 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.096072 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.096244 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.096399 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.096537 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.096736 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.096761 5118 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.096790 5118 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.096807 5118 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.097971 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). 
InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.197838 5118 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.197891 5118 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.412303 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.413188 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495" exitCode=0 Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.413292 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.413328 5118 scope.go:117] "RemoveContainer" containerID="24f0d293f59538f561797000d7d6d3bf5cf65c588d7307bbc362958b5c993376" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.416355 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"f9b64d95-63f3-4084-bcf2-406ff2c75cee","Type":"ContainerDied","Data":"b332af179a85acb373a4603b4144a4dd23feddfed42479a5b7ab50c73a79c4cb"} Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.416416 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b332af179a85acb373a4603b4144a4dd23feddfed42479a5b7ab50c73a79c4cb" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.416466 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.435103 5118 status_manager.go:895] "Failed to get status for pod" podUID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.437070 5118 scope.go:117] "RemoveContainer" containerID="5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.437097 5118 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.437472 5118 
status_manager.go:895] "Failed to get status for pod" podUID="ae767afd-59d5-4c04-9ecc-f9ae7b317698" pod="openshift-image-registry/image-pruner-29482560-n7qwb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29482560-n7qwb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.441058 5118 status_manager.go:895] "Failed to get status for pod" podUID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.441682 5118 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.442364 5118 status_manager.go:895] "Failed to get status for pod" podUID="ae767afd-59d5-4c04-9ecc-f9ae7b317698" pod="openshift-image-registry/image-pruner-29482560-n7qwb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29482560-n7qwb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.452144 5118 scope.go:117] "RemoveContainer" containerID="e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.464812 5118 scope.go:117] "RemoveContainer" containerID="958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.475923 5118 scope.go:117] "RemoveContainer" 
containerID="fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.487923 5118 scope.go:117] "RemoveContainer" containerID="fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.547779 5118 scope.go:117] "RemoveContainer" containerID="24f0d293f59538f561797000d7d6d3bf5cf65c588d7307bbc362958b5c993376" Jan 21 00:12:52 crc kubenswrapper[5118]: E0121 00:12:52.548286 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24f0d293f59538f561797000d7d6d3bf5cf65c588d7307bbc362958b5c993376\": container with ID starting with 24f0d293f59538f561797000d7d6d3bf5cf65c588d7307bbc362958b5c993376 not found: ID does not exist" containerID="24f0d293f59538f561797000d7d6d3bf5cf65c588d7307bbc362958b5c993376" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.548333 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24f0d293f59538f561797000d7d6d3bf5cf65c588d7307bbc362958b5c993376"} err="failed to get container status \"24f0d293f59538f561797000d7d6d3bf5cf65c588d7307bbc362958b5c993376\": rpc error: code = NotFound desc = could not find container \"24f0d293f59538f561797000d7d6d3bf5cf65c588d7307bbc362958b5c993376\": container with ID starting with 24f0d293f59538f561797000d7d6d3bf5cf65c588d7307bbc362958b5c993376 not found: ID does not exist" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.548360 5118 scope.go:117] "RemoveContainer" containerID="5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370" Jan 21 00:12:52 crc kubenswrapper[5118]: E0121 00:12:52.548691 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370\": container with ID starting with 
5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370 not found: ID does not exist" containerID="5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.548718 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370"} err="failed to get container status \"5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370\": rpc error: code = NotFound desc = could not find container \"5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370\": container with ID starting with 5420e1997e1749ea320776a71a52444ce92ddbd5ca9427f3c9efe5b951c27370 not found: ID does not exist" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.548731 5118 scope.go:117] "RemoveContainer" containerID="e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369" Jan 21 00:12:52 crc kubenswrapper[5118]: E0121 00:12:52.549278 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369\": container with ID starting with e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369 not found: ID does not exist" containerID="e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.549342 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369"} err="failed to get container status \"e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369\": rpc error: code = NotFound desc = could not find container \"e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369\": container with ID starting with e4ebe604136e4069bea4c85be28e37c74101654b01f8f97f9dd6c5e6d2089369 not found: ID does not 
exist" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.549416 5118 scope.go:117] "RemoveContainer" containerID="958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1" Jan 21 00:12:52 crc kubenswrapper[5118]: E0121 00:12:52.549835 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1\": container with ID starting with 958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1 not found: ID does not exist" containerID="958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.549877 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1"} err="failed to get container status \"958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1\": rpc error: code = NotFound desc = could not find container \"958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1\": container with ID starting with 958df862a38bf6ffb6e4b273a5f651ea46a717132f869a0fffedeb2ed05ad6a1 not found: ID does not exist" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.549906 5118 scope.go:117] "RemoveContainer" containerID="fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495" Jan 21 00:12:52 crc kubenswrapper[5118]: E0121 00:12:52.550184 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495\": container with ID starting with fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495 not found: ID does not exist" containerID="fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.550215 5118 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495"} err="failed to get container status \"fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495\": rpc error: code = NotFound desc = could not find container \"fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495\": container with ID starting with fe7f2363564030a94c2b5aaa62bb4c132cd16d168cb151c5b3b9447b2e1be495 not found: ID does not exist" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.550233 5118 scope.go:117] "RemoveContainer" containerID="fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031" Jan 21 00:12:52 crc kubenswrapper[5118]: E0121 00:12:52.550642 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\": container with ID starting with fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031 not found: ID does not exist" containerID="fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.550678 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031"} err="failed to get container status \"fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\": rpc error: code = NotFound desc = could not find container \"fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031\": container with ID starting with fe33373ea281d91b409c5cb5d649102effdff8d81e629af326cf6ec982abd031 not found: ID does not exist" Jan 21 00:12:52 crc kubenswrapper[5118]: E0121 00:12:52.842026 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.129.56.4:6443: connect: connection refused" interval="3.2s" Jan 21 00:12:52 crc kubenswrapper[5118]: I0121 00:12:52.982032 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 21 00:12:54 crc kubenswrapper[5118]: E0121 00:12:54.650900 5118 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.4:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:54 crc kubenswrapper[5118]: I0121 00:12:54.651389 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:54 crc kubenswrapper[5118]: E0121 00:12:54.677902 5118 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.4:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c96a1a67cc5b0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:12:54.676858288 +0000 UTC m=+230.001105306,LastTimestamp:2026-01-21 00:12:54.676858288 +0000 UTC m=+230.001105306,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:12:54 crc kubenswrapper[5118]: I0121 00:12:54.981977 5118 status_manager.go:895] "Failed to get status for pod" podUID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:54 crc kubenswrapper[5118]: I0121 00:12:54.983312 5118 status_manager.go:895] "Failed to get status for pod" podUID="ae767afd-59d5-4c04-9ecc-f9ae7b317698" pod="openshift-image-registry/image-pruner-29482560-n7qwb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29482560-n7qwb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:55 crc kubenswrapper[5118]: I0121 00:12:55.440762 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"3fe13aafd185aefb89f6a2188e2b8b56dfb34cae764a5f7d9c326cca7e15f152"} Jan 21 00:12:55 crc kubenswrapper[5118]: I0121 00:12:55.440823 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"502bb0becc99ee54a65037f11445f898645baec491367a8a40fbcf417e76bb9a"} Jan 21 00:12:55 crc kubenswrapper[5118]: I0121 00:12:55.441064 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:55 crc kubenswrapper[5118]: I0121 00:12:55.441369 5118 status_manager.go:895] "Failed to get status for pod" podUID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" pod="openshift-kube-apiserver/installer-12-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:55 crc kubenswrapper[5118]: E0121 00:12:55.441553 5118 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.4:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:12:55 crc kubenswrapper[5118]: I0121 00:12:55.441695 5118 status_manager.go:895] "Failed to get status for pod" podUID="ae767afd-59d5-4c04-9ecc-f9ae7b317698" pod="openshift-image-registry/image-pruner-29482560-n7qwb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29482560-n7qwb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:12:56 crc kubenswrapper[5118]: E0121 00:12:56.042989 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="6.4s" Jan 21 00:13:00 crc kubenswrapper[5118]: E0121 00:13:00.011058 5118 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.4:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-tlb84" volumeName="registry-storage" Jan 21 00:13:01 crc kubenswrapper[5118]: I0121 00:13:01.975827 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:13:01 crc kubenswrapper[5118]: I0121 00:13:01.986425 5118 status_manager.go:895] "Failed to get status for pod" podUID="ae767afd-59d5-4c04-9ecc-f9ae7b317698" pod="openshift-image-registry/image-pruner-29482560-n7qwb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29482560-n7qwb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:13:01 crc kubenswrapper[5118]: I0121 00:13:01.986909 5118 status_manager.go:895] "Failed to get status for pod" podUID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:13:01 crc kubenswrapper[5118]: I0121 00:13:01.997723 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="82b75e4d-eb03-4a0f-b349-9596c36b1f7d" Jan 21 00:13:01 crc kubenswrapper[5118]: I0121 00:13:01.997755 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="82b75e4d-eb03-4a0f-b349-9596c36b1f7d" Jan 21 00:13:01 crc kubenswrapper[5118]: E0121 00:13:01.998564 5118 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:13:01 crc kubenswrapper[5118]: I0121 00:13:01.998906 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:13:02 crc kubenswrapper[5118]: E0121 00:13:02.445340 5118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="7s" Jan 21 00:13:02 crc kubenswrapper[5118]: I0121 00:13:02.504839 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 00:13:02 crc kubenswrapper[5118]: I0121 00:13:02.504926 5118 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="f71ef5ad6b3cecbe91cf0fa1e4e8759ddda878222a1c71e9801313336e424458" exitCode=1 Jan 21 00:13:02 crc kubenswrapper[5118]: I0121 00:13:02.504991 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"f71ef5ad6b3cecbe91cf0fa1e4e8759ddda878222a1c71e9801313336e424458"} Jan 21 00:13:02 crc kubenswrapper[5118]: I0121 00:13:02.506114 5118 scope.go:117] "RemoveContainer" containerID="f71ef5ad6b3cecbe91cf0fa1e4e8759ddda878222a1c71e9801313336e424458" Jan 21 00:13:02 crc kubenswrapper[5118]: I0121 00:13:02.506305 5118 status_manager.go:895] "Failed to get status for pod" podUID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:13:02 crc kubenswrapper[5118]: I0121 00:13:02.506929 5118 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:13:02 crc kubenswrapper[5118]: I0121 00:13:02.507132 5118 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="c4f4dc559ed0be37219b23219115e6d18a88ff7c0f37c59aabc758276a4ebe6f" exitCode=0 Jan 21 00:13:02 crc kubenswrapper[5118]: I0121 00:13:02.507208 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"c4f4dc559ed0be37219b23219115e6d18a88ff7c0f37c59aabc758276a4ebe6f"} Jan 21 00:13:02 crc kubenswrapper[5118]: I0121 00:13:02.507241 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"85374358c12cf2082b6d181edef142a453aa82749657ceff50ee66c3c38e1b90"} Jan 21 00:13:02 crc kubenswrapper[5118]: I0121 00:13:02.507503 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="82b75e4d-eb03-4a0f-b349-9596c36b1f7d" Jan 21 00:13:02 crc kubenswrapper[5118]: I0121 00:13:02.507524 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="82b75e4d-eb03-4a0f-b349-9596c36b1f7d" Jan 21 00:13:02 crc kubenswrapper[5118]: I0121 00:13:02.507714 5118 status_manager.go:895] "Failed to get status for pod" podUID="ae767afd-59d5-4c04-9ecc-f9ae7b317698" pod="openshift-image-registry/image-pruner-29482560-n7qwb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29482560-n7qwb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:13:02 crc kubenswrapper[5118]: 
E0121 00:13:02.507948 5118 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:13:02 crc kubenswrapper[5118]: I0121 00:13:02.508683 5118 status_manager.go:895] "Failed to get status for pod" podUID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:13:02 crc kubenswrapper[5118]: I0121 00:13:02.509027 5118 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:13:02 crc kubenswrapper[5118]: I0121 00:13:02.509699 5118 status_manager.go:895] "Failed to get status for pod" podUID="ae767afd-59d5-4c04-9ecc-f9ae7b317698" pod="openshift-image-registry/image-pruner-29482560-n7qwb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29482560-n7qwb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 00:13:02 crc kubenswrapper[5118]: E0121 00:13:02.533460 5118 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.4:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c96a1a67cc5b0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 00:12:54.676858288 +0000 UTC m=+230.001105306,LastTimestamp:2026-01-21 00:12:54.676858288 +0000 UTC m=+230.001105306,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 00:13:03 crc kubenswrapper[5118]: I0121 00:13:03.516517 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 00:13:03 crc kubenswrapper[5118]: I0121 00:13:03.516875 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"55b7b5573c0c2881496737db927c0d6daa10b43a7325ab82eadacfe895d4722d"} Jan 21 00:13:03 crc kubenswrapper[5118]: I0121 00:13:03.519208 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"f44453ad91ecee21672cafc662f417edb062bb51fce83e572d13e9c8f735959f"} Jan 21 00:13:03 crc kubenswrapper[5118]: I0121 00:13:03.519239 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"7e1542e1dcb2fd95783229f2635b9286a667af6c650fc35452e22c0211ee0052"} Jan 21 
00:13:03 crc kubenswrapper[5118]: I0121 00:13:03.801773 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:13:03 crc kubenswrapper[5118]: I0121 00:13:03.801854 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:13:04 crc kubenswrapper[5118]: I0121 00:13:04.151953 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:13:04 crc kubenswrapper[5118]: I0121 00:13:04.963736 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:13:04 crc kubenswrapper[5118]: I0121 00:13:04.963977 5118 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 00:13:04 crc kubenswrapper[5118]: I0121 00:13:04.964113 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 00:13:05 crc kubenswrapper[5118]: I0121 00:13:05.631979 5118 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"59a57298a08913535c6d0b1ef25e7fb6cabb0686bbc12c37c7cbe14a9c491e8e"} Jan 21 00:13:05 crc kubenswrapper[5118]: I0121 00:13:05.632385 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6777c833235ddb8266ef64fff26cfa6c9fd1b481344deecd5541638289054f6a"} Jan 21 00:13:06 crc kubenswrapper[5118]: I0121 00:13:06.640358 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"5ebf67e13b7e1de2071eefe3e98f542e886873968b45b886627ae494e171e15c"} Jan 21 00:13:06 crc kubenswrapper[5118]: I0121 00:13:06.640916 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:13:06 crc kubenswrapper[5118]: I0121 00:13:06.641029 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="82b75e4d-eb03-4a0f-b349-9596c36b1f7d" Jan 21 00:13:06 crc kubenswrapper[5118]: I0121 00:13:06.641059 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="82b75e4d-eb03-4a0f-b349-9596c36b1f7d" Jan 21 00:13:06 crc kubenswrapper[5118]: I0121 00:13:06.651508 5118 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:13:06 crc kubenswrapper[5118]: I0121 00:13:06.651542 5118 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:13:06 crc kubenswrapper[5118]: I0121 00:13:06.999273 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:13:06 crc kubenswrapper[5118]: I0121 00:13:06.999329 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:13:07 crc kubenswrapper[5118]: I0121 00:13:07.004919 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:13:07 crc kubenswrapper[5118]: I0121 00:13:07.645474 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="82b75e4d-eb03-4a0f-b349-9596c36b1f7d" Jan 21 00:13:07 crc kubenswrapper[5118]: I0121 00:13:07.645515 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="82b75e4d-eb03-4a0f-b349-9596c36b1f7d" Jan 21 00:13:07 crc kubenswrapper[5118]: I0121 00:13:07.649455 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:13:08 crc kubenswrapper[5118]: I0121 00:13:08.650107 5118 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="82b75e4d-eb03-4a0f-b349-9596c36b1f7d" Jan 21 00:13:08 crc kubenswrapper[5118]: I0121 00:13:08.650143 5118 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="82b75e4d-eb03-4a0f-b349-9596c36b1f7d" Jan 21 00:13:09 crc kubenswrapper[5118]: I0121 00:13:09.844746 5118 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="47c0676c-2d15-4c5a-818a-e9f6f2c3decd" Jan 21 00:13:14 crc kubenswrapper[5118]: I0121 00:13:14.151955 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:13:14 crc kubenswrapper[5118]: I0121 
00:13:14.964494 5118 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 00:13:14 crc kubenswrapper[5118]: I0121 00:13:14.964583 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 00:13:19 crc kubenswrapper[5118]: I0121 00:13:19.841833 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 21 00:13:20 crc kubenswrapper[5118]: I0121 00:13:20.545204 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:20 crc kubenswrapper[5118]: I0121 00:13:20.761022 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 21 00:13:20 crc kubenswrapper[5118]: I0121 00:13:20.774020 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 21 00:13:20 crc kubenswrapper[5118]: I0121 00:13:20.807736 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 21 00:13:20 crc kubenswrapper[5118]: I0121 00:13:20.886955 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 21 00:13:21 crc kubenswrapper[5118]: I0121 00:13:21.165650 5118 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 21 00:13:21 crc kubenswrapper[5118]: I0121 00:13:21.170639 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 21 00:13:21 crc kubenswrapper[5118]: I0121 00:13:21.200353 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:13:21 crc kubenswrapper[5118]: I0121 00:13:21.211632 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 21 00:13:21 crc kubenswrapper[5118]: I0121 00:13:21.240592 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 21 00:13:21 crc kubenswrapper[5118]: I0121 00:13:21.396590 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:21 crc kubenswrapper[5118]: I0121 00:13:21.567360 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 21 00:13:21 crc kubenswrapper[5118]: I0121 00:13:21.745627 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 21 00:13:22 crc kubenswrapper[5118]: I0121 00:13:22.194754 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 21 00:13:22 crc kubenswrapper[5118]: I0121 00:13:22.366991 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 21 00:13:22 crc 
kubenswrapper[5118]: I0121 00:13:22.383446 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 21 00:13:22 crc kubenswrapper[5118]: I0121 00:13:22.588895 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 21 00:13:22 crc kubenswrapper[5118]: I0121 00:13:22.634764 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 21 00:13:22 crc kubenswrapper[5118]: I0121 00:13:22.642357 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 21 00:13:22 crc kubenswrapper[5118]: I0121 00:13:22.725489 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 21 00:13:22 crc kubenswrapper[5118]: I0121 00:13:22.755515 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 21 00:13:22 crc kubenswrapper[5118]: I0121 00:13:22.772448 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 21 00:13:22 crc kubenswrapper[5118]: I0121 00:13:22.860640 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 21 00:13:22 crc kubenswrapper[5118]: I0121 00:13:22.900588 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 21 00:13:22 crc kubenswrapper[5118]: I0121 00:13:22.902820 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:22 crc kubenswrapper[5118]: I0121 00:13:22.988035 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.045940 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.180024 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.186367 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.198945 5118 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.204685 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.204798 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.209314 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.220827 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.220809438 podStartE2EDuration="17.220809438s" podCreationTimestamp="2026-01-21 00:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:13:23.220607823 +0000 UTC m=+258.544854871" watchObservedRunningTime="2026-01-21 00:13:23.220809438 +0000 UTC m=+258.545056446" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.283198 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.300544 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.319982 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.447794 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.483472 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.515307 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.612345 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.673818 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.815858 5118 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.882231 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.896417 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 21 00:13:23 crc kubenswrapper[5118]: I0121 00:13:23.977629 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 21 00:13:24 crc kubenswrapper[5118]: I0121 00:13:24.032044 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 21 00:13:24 crc kubenswrapper[5118]: I0121 00:13:24.215367 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:24 crc kubenswrapper[5118]: I0121 00:13:24.277289 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 21 00:13:24 crc kubenswrapper[5118]: I0121 00:13:24.304283 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 21 00:13:24 crc kubenswrapper[5118]: I0121 00:13:24.349007 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 21 00:13:24 crc kubenswrapper[5118]: I0121 00:13:24.408038 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 21 00:13:24 crc 
kubenswrapper[5118]: I0121 00:13:24.737825 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 21 00:13:24 crc kubenswrapper[5118]: I0121 00:13:24.758556 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 21 00:13:24 crc kubenswrapper[5118]: I0121 00:13:24.806352 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:24 crc kubenswrapper[5118]: I0121 00:13:24.919568 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 21 00:13:24 crc kubenswrapper[5118]: I0121 00:13:24.967838 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:24 crc kubenswrapper[5118]: I0121 00:13:24.968857 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:13:24 crc kubenswrapper[5118]: I0121 00:13:24.990723 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 00:13:25 crc kubenswrapper[5118]: I0121 00:13:25.235789 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 21 00:13:25 crc kubenswrapper[5118]: I0121 00:13:25.289367 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 21 00:13:25 crc kubenswrapper[5118]: I0121 00:13:25.308140 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 21 00:13:25 crc kubenswrapper[5118]: 
I0121 00:13:25.412483 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:13:25 crc kubenswrapper[5118]: I0121 00:13:25.472565 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 21 00:13:25 crc kubenswrapper[5118]: I0121 00:13:25.481475 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 21 00:13:25 crc kubenswrapper[5118]: I0121 00:13:25.521553 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:25 crc kubenswrapper[5118]: I0121 00:13:25.654717 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 21 00:13:25 crc kubenswrapper[5118]: I0121 00:13:25.672513 5118 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 00:13:25 crc kubenswrapper[5118]: I0121 00:13:25.693259 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:13:25 crc kubenswrapper[5118]: I0121 00:13:25.792535 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 21 00:13:25 crc kubenswrapper[5118]: I0121 00:13:25.859047 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 21 00:13:25 crc kubenswrapper[5118]: I0121 00:13:25.909583 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 21 00:13:25 crc kubenswrapper[5118]: I0121 00:13:25.992647 5118 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.005560 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.118867 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.139963 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.154286 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.236203 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.255583 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.339215 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.377234 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.391408 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.441802 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.505587 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.547852 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.606751 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.805701 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.816843 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.838856 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.884836 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.904185 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.928023 5118 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.943804 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:26 crc kubenswrapper[5118]: I0121 00:13:26.986673 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.016920 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.055134 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.056515 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.241134 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.275812 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.281945 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.309509 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.329059 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.344898 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.348479 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.481543 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.562361 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.702097 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.735746 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.747082 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.755875 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.841221 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.884575 5118 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 00:13:27 crc kubenswrapper[5118]: I0121 00:13:27.988018 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.000811 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.075538 5118 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.115957 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.145976 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.293297 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.309817 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.320869 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.357292 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.428868 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.483621 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.484243 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.587866 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.678563 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.733634 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.787430 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.851141 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.880464 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 21 00:13:28 crc kubenswrapper[5118]: I0121 00:13:28.979712 5118 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.050580 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.155527 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.247891 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.273408 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.344136 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.517756 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.519588 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.523887 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.531550 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.624485 5118 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.642365 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.695936 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.700952 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.709259 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.753884 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.756180 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.857302 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.858268 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.873444 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.879482 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.916101 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 21 00:13:29 crc kubenswrapper[5118]: I0121 00:13:29.976829 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.089283 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.104913 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.172258 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.216047 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.325201 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.332100 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.348789 5118 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.353882 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.361726 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.377653 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.389488 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.508957 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.529771 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.530176 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.602355 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.602575 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 
00:13:30.603379 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.608765 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.634995 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.721281 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.824427 5118 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.824678 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://3fe13aafd185aefb89f6a2188e2b8b56dfb34cae764a5f7d9c326cca7e15f152" gracePeriod=5 Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.854426 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.867391 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.867500 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 
00:13:30.867850 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.873564 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 21 00:13:30 crc kubenswrapper[5118]: I0121 00:13:30.923330 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.088110 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.096274 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.183580 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.192767 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.214895 5118 ???:1] "http: TLS handshake error from 192.168.126.11:44964: no serving certificate available for the kubelet" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.264630 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.298289 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.336108 5118 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.368034 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.427286 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.447683 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.472013 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.541767 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.565101 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.572946 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.625673 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.630444 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.633489 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.669261 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.695003 5118 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.708641 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.980780 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.992665 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 21 00:13:31 crc kubenswrapper[5118]: I0121 00:13:31.995932 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 21 00:13:32 crc kubenswrapper[5118]: I0121 00:13:32.032857 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 21 00:13:32 crc kubenswrapper[5118]: I0121 00:13:32.043048 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 21 00:13:32 crc kubenswrapper[5118]: I0121 00:13:32.135612 5118 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 21 00:13:32 crc kubenswrapper[5118]: I0121 00:13:32.167569 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:13:32 crc kubenswrapper[5118]: I0121 00:13:32.426396 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 21 00:13:32 crc kubenswrapper[5118]: I0121 00:13:32.522879 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 21 00:13:32 crc kubenswrapper[5118]: I0121 00:13:32.586061 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 21 00:13:32 crc kubenswrapper[5118]: I0121 00:13:32.633828 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 21 00:13:32 crc kubenswrapper[5118]: I0121 00:13:32.690588 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 21 00:13:32 crc kubenswrapper[5118]: I0121 00:13:32.739609 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 21 00:13:32 crc kubenswrapper[5118]: I0121 00:13:32.911912 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.052415 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.101564 5118 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.103223 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.109543 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.137578 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.157961 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.216898 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.247248 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.311641 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.353135 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.393107 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.400182 
5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.455507 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.532454 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.535722 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.542017 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.801211 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.801523 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.844499 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.858809 5118 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.888303 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 21 00:13:33 crc kubenswrapper[5118]: I0121 00:13:33.968585 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 21 00:13:34 crc kubenswrapper[5118]: I0121 00:13:34.100294 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 21 00:13:34 crc kubenswrapper[5118]: I0121 00:13:34.156191 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 21 00:13:34 crc kubenswrapper[5118]: I0121 00:13:34.353516 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 21 00:13:34 crc kubenswrapper[5118]: I0121 00:13:34.379805 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 21 00:13:34 crc kubenswrapper[5118]: I0121 00:13:34.488365 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 21 00:13:34 crc kubenswrapper[5118]: I0121 00:13:34.549119 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 21 00:13:34 crc kubenswrapper[5118]: I0121 00:13:34.575172 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 21 00:13:35 crc kubenswrapper[5118]: I0121 00:13:35.202829 
5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:35 crc kubenswrapper[5118]: I0121 00:13:35.264403 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 21 00:13:35 crc kubenswrapper[5118]: I0121 00:13:35.313508 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 21 00:13:35 crc kubenswrapper[5118]: I0121 00:13:35.512526 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 21 00:13:35 crc kubenswrapper[5118]: I0121 00:13:35.660487 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 21 00:13:35 crc kubenswrapper[5118]: I0121 00:13:35.710479 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 21 00:13:35 crc kubenswrapper[5118]: I0121 00:13:35.845224 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 21 00:13:35 crc kubenswrapper[5118]: I0121 00:13:35.854816 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 21 00:13:35 crc kubenswrapper[5118]: I0121 00:13:35.883676 5118 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 00:13:35 crc kubenswrapper[5118]: I0121 00:13:35.915412 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 21 00:13:35 crc kubenswrapper[5118]: 
I0121 00:13:35.917661 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.216689 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.252085 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.396857 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.396926 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.398383 5118 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.472404 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.472463 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.472502 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.472498 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.472525 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.472545 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.472546 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.472621 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.472652 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.472999 5118 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.473025 5118 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.473036 5118 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.473047 5118 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.483268 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.574387 5118 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.793052 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.797717 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.797753 5118 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="3fe13aafd185aefb89f6a2188e2b8b56dfb34cae764a5f7d9c326cca7e15f152" exitCode=137 Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.797861 5118 scope.go:117] "RemoveContainer" containerID="3fe13aafd185aefb89f6a2188e2b8b56dfb34cae764a5f7d9c326cca7e15f152" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.797963 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.810387 5118 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.814577 5118 scope.go:117] "RemoveContainer" containerID="3fe13aafd185aefb89f6a2188e2b8b56dfb34cae764a5f7d9c326cca7e15f152" Jan 21 00:13:36 crc kubenswrapper[5118]: E0121 00:13:36.814966 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fe13aafd185aefb89f6a2188e2b8b56dfb34cae764a5f7d9c326cca7e15f152\": container with ID starting with 3fe13aafd185aefb89f6a2188e2b8b56dfb34cae764a5f7d9c326cca7e15f152 not found: ID does not exist" containerID="3fe13aafd185aefb89f6a2188e2b8b56dfb34cae764a5f7d9c326cca7e15f152" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.814997 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe13aafd185aefb89f6a2188e2b8b56dfb34cae764a5f7d9c326cca7e15f152"} err="failed to get container status \"3fe13aafd185aefb89f6a2188e2b8b56dfb34cae764a5f7d9c326cca7e15f152\": rpc error: code = NotFound desc = could not find container \"3fe13aafd185aefb89f6a2188e2b8b56dfb34cae764a5f7d9c326cca7e15f152\": container with ID starting with 3fe13aafd185aefb89f6a2188e2b8b56dfb34cae764a5f7d9c326cca7e15f152 not found: ID does not exist" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.885672 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 21 00:13:36 crc kubenswrapper[5118]: I0121 00:13:36.981226 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 21 00:13:37 crc kubenswrapper[5118]: I0121 00:13:37.154608 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 21 00:13:37 crc kubenswrapper[5118]: I0121 00:13:37.313942 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 21 00:13:38 crc kubenswrapper[5118]: I0121 00:13:38.760646 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 21 00:13:39 crc kubenswrapper[5118]: I0121 00:13:39.751177 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 21 00:13:51 crc kubenswrapper[5118]: I0121 00:13:51.880884 5118 generic.go:358] "Generic (PLEG): container finished" podID="a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" containerID="3ff85f1d6300e9395787d48e93f1c0f2a1727898f093606856ee28c33b663611" exitCode=0 Jan 21 00:13:51 crc kubenswrapper[5118]: I0121 00:13:51.880974 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" event={"ID":"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00","Type":"ContainerDied","Data":"3ff85f1d6300e9395787d48e93f1c0f2a1727898f093606856ee28c33b663611"} Jan 21 00:13:51 crc kubenswrapper[5118]: I0121 00:13:51.882121 5118 scope.go:117] "RemoveContainer" containerID="3ff85f1d6300e9395787d48e93f1c0f2a1727898f093606856ee28c33b663611" Jan 21 00:13:52 crc kubenswrapper[5118]: I0121 00:13:52.261969 5118 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" Jan 21 00:13:52 crc kubenswrapper[5118]: I0121 00:13:52.890015 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" event={"ID":"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00","Type":"ContainerStarted","Data":"7fc80a9d862859bc887a15b3aa15cd37e9cfe4c7c11072a59fada4a5f9114766"} Jan 21 00:13:52 crc kubenswrapper[5118]: I0121 00:13:52.890504 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" Jan 21 00:13:52 crc kubenswrapper[5118]: I0121 00:13:52.891883 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" Jan 21 00:14:03 crc kubenswrapper[5118]: I0121 00:14:03.181616 5118 ???:1] "http: TLS handshake error from 192.168.126.11:41486: no serving certificate available for the kubelet" Jan 21 00:14:03 crc kubenswrapper[5118]: I0121 00:14:03.801115 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:14:03 crc kubenswrapper[5118]: I0121 00:14:03.801209 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:14:03 crc kubenswrapper[5118]: I0121 00:14:03.801254 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-22r9n"
Jan 21 00:14:03 crc kubenswrapper[5118]: I0121 00:14:03.801832 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebce512679b1ac6a1172cf6df51d1cdffd5fd6e643bd11e70ffe7482570cd359"} pod="openshift-machine-config-operator/machine-config-daemon-22r9n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 00:14:03 crc kubenswrapper[5118]: I0121 00:14:03.801894 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" containerID="cri-o://ebce512679b1ac6a1172cf6df51d1cdffd5fd6e643bd11e70ffe7482570cd359" gracePeriod=600
Jan 21 00:14:03 crc kubenswrapper[5118]: I0121 00:14:03.951299 5118 generic.go:358] "Generic (PLEG): container finished" podID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerID="ebce512679b1ac6a1172cf6df51d1cdffd5fd6e643bd11e70ffe7482570cd359" exitCode=0
Jan 21 00:14:03 crc kubenswrapper[5118]: I0121 00:14:03.951390 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerDied","Data":"ebce512679b1ac6a1172cf6df51d1cdffd5fd6e643bd11e70ffe7482570cd359"}
Jan 21 00:14:04 crc kubenswrapper[5118]: I0121 00:14:04.957634 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"92f94cff427bbfd2ea80a4772b8465005fc945125ca4b7e3c490d52f65cdb761"}
Jan 21 00:14:05 crc kubenswrapper[5118]: I0121 00:14:05.103010 5118 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 00:14:05 crc kubenswrapper[5118]: I0121 00:14:05.103026 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 00:14:05 crc kubenswrapper[5118]: I0121 00:14:05.724525 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-trbkq"]
Jan 21 00:14:05 crc kubenswrapper[5118]: I0121 00:14:05.724855 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" podUID="c1f2bd9d-a01b-4672-b4b1-f88057b52f08" containerName="controller-manager" containerID="cri-o://a994dad9ee702ae7f09b2f2d20f3829fe75ed2d0e6c5a5e9f9644eb3d04682f7" gracePeriod=30
Jan 21 00:14:05 crc kubenswrapper[5118]: I0121 00:14:05.764723 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm"]
Jan 21 00:14:05 crc kubenswrapper[5118]: I0121 00:14:05.765018 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" podUID="1968a714-512b-40f9-a302-f8905b0855fd" containerName="route-controller-manager" containerID="cri-o://8fea837de3e84be13255f5a57032997f075300f5a76a37e708bdcd543664c862" gracePeriod=30
Jan 21 00:14:05 crc kubenswrapper[5118]: I0121 00:14:05.783710 5118 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-s55xm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 21 00:14:05 crc
kubenswrapper[5118]: I0121 00:14:05.783788 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" podUID="1968a714-512b-40f9-a302-f8905b0855fd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 21 00:14:05 crc kubenswrapper[5118]: I0121 00:14:05.968076 5118 generic.go:358] "Generic (PLEG): container finished" podID="c1f2bd9d-a01b-4672-b4b1-f88057b52f08" containerID="a994dad9ee702ae7f09b2f2d20f3829fe75ed2d0e6c5a5e9f9644eb3d04682f7" exitCode=0
Jan 21 00:14:05 crc kubenswrapper[5118]: I0121 00:14:05.968214 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" event={"ID":"c1f2bd9d-a01b-4672-b4b1-f88057b52f08","Type":"ContainerDied","Data":"a994dad9ee702ae7f09b2f2d20f3829fe75ed2d0e6c5a5e9f9644eb3d04682f7"}
Jan 21 00:14:05 crc kubenswrapper[5118]: I0121 00:14:05.970139 5118 generic.go:358] "Generic (PLEG): container finished" podID="1968a714-512b-40f9-a302-f8905b0855fd" containerID="8fea837de3e84be13255f5a57032997f075300f5a76a37e708bdcd543664c862" exitCode=0
Jan 21 00:14:05 crc kubenswrapper[5118]: I0121 00:14:05.971130 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" event={"ID":"1968a714-512b-40f9-a302-f8905b0855fd","Type":"ContainerDied","Data":"8fea837de3e84be13255f5a57032997f075300f5a76a37e708bdcd543664c862"}
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.108028 5118 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.138939 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"]
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.139808 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c1f2bd9d-a01b-4672-b4b1-f88057b52f08" containerName="controller-manager"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.139823 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f2bd9d-a01b-4672-b4b1-f88057b52f08" containerName="controller-manager"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.139841 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" containerName="installer"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.139849 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" containerName="installer"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.139867 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae767afd-59d5-4c04-9ecc-f9ae7b317698" containerName="image-pruner"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.139876 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae767afd-59d5-4c04-9ecc-f9ae7b317698" containerName="image-pruner"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.139889 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.139896 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.140019 5118 memory_manager.go:356]
"RemoveStaleState removing state" podUID="c1f2bd9d-a01b-4672-b4b1-f88057b52f08" containerName="controller-manager"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.140033 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.140051 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="f9b64d95-63f3-4084-bcf2-406ff2c75cee" containerName="installer"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.140061 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="ae767afd-59d5-4c04-9ecc-f9ae7b317698" containerName="image-pruner"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.143631 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.149024 5118 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.157000 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"]
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.186097 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"]
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.188030 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1968a714-512b-40f9-a302-f8905b0855fd" containerName="route-controller-manager"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.188056 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="1968a714-512b-40f9-a302-f8905b0855fd" containerName="route-controller-manager"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.188295 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="1968a714-512b-40f9-a302-f8905b0855fd" containerName="route-controller-manager"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.193415 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"]
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.193552 5118 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.202882 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-proxy-ca-bundles\") pod \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") "
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.202972 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-tmp\") pod \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") "
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.203230 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-config\") pod \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") "
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.203281 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5r4q\" (UniqueName: \"kubernetes.io/projected/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-kube-api-access-c5r4q\") pod \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") "
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.203348 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-serving-cert\") pod \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") "
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.203371 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-client-ca\") pod \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\" (UID: \"c1f2bd9d-a01b-4672-b4b1-f88057b52f08\") "
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.203660 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-tmp" (OuterVolumeSpecName: "tmp") pod "c1f2bd9d-a01b-4672-b4b1-f88057b52f08" (UID: "c1f2bd9d-a01b-4672-b4b1-f88057b52f08"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.203805 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-tmp\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.203929 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c1f2bd9d-a01b-4672-b4b1-f88057b52f08" (UID: "c1f2bd9d-a01b-4672-b4b1-f88057b52f08"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.204095 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-config" (OuterVolumeSpecName: "config") pod "c1f2bd9d-a01b-4672-b4b1-f88057b52f08" (UID: "c1f2bd9d-a01b-4672-b4b1-f88057b52f08"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.204587 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-client-ca" (OuterVolumeSpecName: "client-ca") pod "c1f2bd9d-a01b-4672-b4b1-f88057b52f08" (UID: "c1f2bd9d-a01b-4672-b4b1-f88057b52f08"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.211318 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-kube-api-access-c5r4q" (OuterVolumeSpecName: "kube-api-access-c5r4q") pod "c1f2bd9d-a01b-4672-b4b1-f88057b52f08" (UID: "c1f2bd9d-a01b-4672-b4b1-f88057b52f08"). InnerVolumeSpecName "kube-api-access-c5r4q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.214484 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c1f2bd9d-a01b-4672-b4b1-f88057b52f08" (UID: "c1f2bd9d-a01b-4672-b4b1-f88057b52f08"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304373 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1968a714-512b-40f9-a302-f8905b0855fd-serving-cert\") pod \"1968a714-512b-40f9-a302-f8905b0855fd\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") "
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304490 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1968a714-512b-40f9-a302-f8905b0855fd-tmp\") pod \"1968a714-512b-40f9-a302-f8905b0855fd\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") "
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304533 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jct6m\" (UniqueName: \"kubernetes.io/projected/1968a714-512b-40f9-a302-f8905b0855fd-kube-api-access-jct6m\") pod \"1968a714-512b-40f9-a302-f8905b0855fd\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") "
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304552 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1968a714-512b-40f9-a302-f8905b0855fd-client-ca\") pod \"1968a714-512b-40f9-a302-f8905b0855fd\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") "
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304570 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1968a714-512b-40f9-a302-f8905b0855fd-config\") pod \"1968a714-512b-40f9-a302-f8905b0855fd\" (UID: \"1968a714-512b-40f9-a302-f8905b0855fd\") "
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304685 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName:
\"kubernetes.io/empty-dir/7a196653-53d7-403d-972a-b3c1dc8c0cb9-tmp\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304707 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq9d4\" (UniqueName: \"kubernetes.io/projected/7a196653-53d7-403d-972a-b3c1dc8c0cb9-kube-api-access-xq9d4\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304730 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a196653-53d7-403d-972a-b3c1dc8c0cb9-serving-cert\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304749 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-client-ca\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304767 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-proxy-ca-bundles\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304797 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prldv\" (UniqueName: \"kubernetes.io/projected/c7e92bb2-2266-40d8-99a3-c8d004628117-kube-api-access-prldv\") pod \"route-controller-manager-f84876cb9-hvz2j\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304814 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e92bb2-2266-40d8-99a3-c8d004628117-config\") pod \"route-controller-manager-f84876cb9-hvz2j\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304839 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7e92bb2-2266-40d8-99a3-c8d004628117-client-ca\") pod \"route-controller-manager-f84876cb9-hvz2j\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304867 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-config\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304884 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName:
\"kubernetes.io/empty-dir/c7e92bb2-2266-40d8-99a3-c8d004628117-tmp\") pod \"route-controller-manager-f84876cb9-hvz2j\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304926 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7e92bb2-2266-40d8-99a3-c8d004628117-serving-cert\") pod \"route-controller-manager-f84876cb9-hvz2j\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304974 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304984 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.304993 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c5r4q\" (UniqueName: \"kubernetes.io/projected/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-kube-api-access-c5r4q\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.305002 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.305368 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1968a714-512b-40f9-a302-f8905b0855fd-tmp" (OuterVolumeSpecName:
"tmp") pod "1968a714-512b-40f9-a302-f8905b0855fd" (UID: "1968a714-512b-40f9-a302-f8905b0855fd"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.305448 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1f2bd9d-a01b-4672-b4b1-f88057b52f08-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.305745 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1968a714-512b-40f9-a302-f8905b0855fd-config" (OuterVolumeSpecName: "config") pod "1968a714-512b-40f9-a302-f8905b0855fd" (UID: "1968a714-512b-40f9-a302-f8905b0855fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.306262 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1968a714-512b-40f9-a302-f8905b0855fd-client-ca" (OuterVolumeSpecName: "client-ca") pod "1968a714-512b-40f9-a302-f8905b0855fd" (UID: "1968a714-512b-40f9-a302-f8905b0855fd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.309659 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1968a714-512b-40f9-a302-f8905b0855fd-kube-api-access-jct6m" (OuterVolumeSpecName: "kube-api-access-jct6m") pod "1968a714-512b-40f9-a302-f8905b0855fd" (UID: "1968a714-512b-40f9-a302-f8905b0855fd"). InnerVolumeSpecName "kube-api-access-jct6m".
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.309910 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1968a714-512b-40f9-a302-f8905b0855fd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1968a714-512b-40f9-a302-f8905b0855fd" (UID: "1968a714-512b-40f9-a302-f8905b0855fd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.407132 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c7e92bb2-2266-40d8-99a3-c8d004628117-tmp\") pod \"route-controller-manager-f84876cb9-hvz2j\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.407319 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7e92bb2-2266-40d8-99a3-c8d004628117-serving-cert\") pod \"route-controller-manager-f84876cb9-hvz2j\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.407384 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7a196653-53d7-403d-972a-b3c1dc8c0cb9-tmp\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.407429 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xq9d4\" (UniqueName:
\"kubernetes.io/projected/7a196653-53d7-403d-972a-b3c1dc8c0cb9-kube-api-access-xq9d4\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.407601 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a196653-53d7-403d-972a-b3c1dc8c0cb9-serving-cert\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.407649 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-client-ca\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.407704 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-proxy-ca-bundles\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.408008 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7a196653-53d7-403d-972a-b3c1dc8c0cb9-tmp\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.408055 5118 reconciler_common.go:224]
"operationExecutor.MountVolume started for volume \"kube-api-access-prldv\" (UniqueName: \"kubernetes.io/projected/c7e92bb2-2266-40d8-99a3-c8d004628117-kube-api-access-prldv\") pod \"route-controller-manager-f84876cb9-hvz2j\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.408135 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e92bb2-2266-40d8-99a3-c8d004628117-config\") pod \"route-controller-manager-f84876cb9-hvz2j\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.408294 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7e92bb2-2266-40d8-99a3-c8d004628117-client-ca\") pod \"route-controller-manager-f84876cb9-hvz2j\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.408813 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c7e92bb2-2266-40d8-99a3-c8d004628117-tmp\") pod \"route-controller-manager-f84876cb9-hvz2j\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.409306 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e92bb2-2266-40d8-99a3-c8d004628117-config\") pod \"route-controller-manager-f84876cb9-hvz2j\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") "
pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.409492 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7e92bb2-2266-40d8-99a3-c8d004628117-client-ca\") pod \"route-controller-manager-f84876cb9-hvz2j\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.409538 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-config\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.409646 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1968a714-512b-40f9-a302-f8905b0855fd-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.409664 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1968a714-512b-40f9-a302-f8905b0855fd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.409675 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1968a714-512b-40f9-a302-f8905b0855fd-tmp\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.409686 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jct6m\" (UniqueName: \"kubernetes.io/projected/1968a714-512b-40f9-a302-f8905b0855fd-kube-api-access-jct6m\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:06 crc kubenswrapper[5118]: I0121
00:14:06.409699 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1968a714-512b-40f9-a302-f8905b0855fd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.410979 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-proxy-ca-bundles\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v" Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.411618 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7e92bb2-2266-40d8-99a3-c8d004628117-serving-cert\") pod \"route-controller-manager-f84876cb9-hvz2j\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j" Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.412959 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-client-ca\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v" Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.413984 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-config\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v" Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.418352 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a196653-53d7-403d-972a-b3c1dc8c0cb9-serving-cert\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v" Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.426662 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq9d4\" (UniqueName: \"kubernetes.io/projected/7a196653-53d7-403d-972a-b3c1dc8c0cb9-kube-api-access-xq9d4\") pod \"controller-manager-5c8896fddf-w2m2v\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v" Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.429970 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-prldv\" (UniqueName: \"kubernetes.io/projected/c7e92bb2-2266-40d8-99a3-c8d004628117-kube-api-access-prldv\") pod \"route-controller-manager-f84876cb9-hvz2j\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j" Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.460065 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v" Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.509148 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j" Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.699568 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"] Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.706054 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.900499 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"] Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.977464 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.982402 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j" Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.982433 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm" event={"ID":"1968a714-512b-40f9-a302-f8905b0855fd","Type":"ContainerDied","Data":"9d98f30b316815dcb2ab563446ddac268f764a905a51f5225672c8ffd41210db"} Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.982458 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j" event={"ID":"c7e92bb2-2266-40d8-99a3-c8d004628117","Type":"ContainerStarted","Data":"c8ec37321d48e4dabd68313cf856b2c0746a59a2dd65a88baa4ea6b91b59dced"} Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.982468 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j" event={"ID":"c7e92bb2-2266-40d8-99a3-c8d004628117","Type":"ContainerStarted","Data":"082341654c80228d604ad206f1369533f93b460d9e88bb807e43434cf0d24820"} Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.982476 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v" event={"ID":"7a196653-53d7-403d-972a-b3c1dc8c0cb9","Type":"ContainerStarted","Data":"3a9a0b9360ce52cd168445ae2e19698d95775b1d283b99132481a76536676f8a"} Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.982494 5118 scope.go:117] "RemoveContainer" containerID="8fea837de3e84be13255f5a57032997f075300f5a76a37e708bdcd543664c862" Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.984668 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" event={"ID":"c1f2bd9d-a01b-4672-b4b1-f88057b52f08","Type":"ContainerDied","Data":"d8162e1dd50bbad0a7190bf92c4e6bafd215436bd2edeeebefcbed69bfebc413"} Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.984787 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-trbkq" Jan 21 00:14:06 crc kubenswrapper[5118]: I0121 00:14:06.999239 5118 scope.go:117] "RemoveContainer" containerID="a994dad9ee702ae7f09b2f2d20f3829fe75ed2d0e6c5a5e9f9644eb3d04682f7" Jan 21 00:14:07 crc kubenswrapper[5118]: I0121 00:14:07.005583 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j" podStartSLOduration=2.005559975 podStartE2EDuration="2.005559975s" podCreationTimestamp="2026-01-21 00:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:14:06.997808518 +0000 UTC m=+302.322055566" watchObservedRunningTime="2026-01-21 00:14:07.005559975 +0000 UTC m=+302.329806993" Jan 21 00:14:07 crc kubenswrapper[5118]: I0121 00:14:07.023543 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm"] Jan 21 00:14:07 crc kubenswrapper[5118]: I0121 00:14:07.032220 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-s55xm"] Jan 21 00:14:07 crc kubenswrapper[5118]: I0121 00:14:07.037941 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-trbkq"] Jan 21 00:14:07 crc kubenswrapper[5118]: I0121 00:14:07.041950 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-trbkq"] Jan 21 00:14:07 crc kubenswrapper[5118]: I0121 00:14:07.408632 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j" Jan 21 00:14:07 crc kubenswrapper[5118]: I0121 00:14:07.994283 5118 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v" event={"ID":"7a196653-53d7-403d-972a-b3c1dc8c0cb9","Type":"ContainerStarted","Data":"a4bc56fb620a2ac754e629711a463493a84b604927e6f19f38e3eb4c2bb4c1a3"} Jan 21 00:14:07 crc kubenswrapper[5118]: I0121 00:14:07.994817 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v" Jan 21 00:14:08 crc kubenswrapper[5118]: I0121 00:14:08.000379 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v" Jan 21 00:14:08 crc kubenswrapper[5118]: I0121 00:14:08.023607 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v" podStartSLOduration=3.023583794 podStartE2EDuration="3.023583794s" podCreationTimestamp="2026-01-21 00:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:14:08.018486328 +0000 UTC m=+303.342733386" watchObservedRunningTime="2026-01-21 00:14:08.023583794 +0000 UTC m=+303.347830832" Jan 21 00:14:08 crc kubenswrapper[5118]: I0121 00:14:08.182536 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"] Jan 21 00:14:08 crc kubenswrapper[5118]: I0121 00:14:08.210543 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"] Jan 21 00:14:08 crc kubenswrapper[5118]: I0121 00:14:08.986882 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1968a714-512b-40f9-a302-f8905b0855fd" path="/var/lib/kubelet/pods/1968a714-512b-40f9-a302-f8905b0855fd/volumes" Jan 21 00:14:08 crc kubenswrapper[5118]: I0121 00:14:08.988907 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="c1f2bd9d-a01b-4672-b4b1-f88057b52f08" path="/var/lib/kubelet/pods/c1f2bd9d-a01b-4672-b4b1-f88057b52f08/volumes" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.008844 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v" podUID="7a196653-53d7-403d-972a-b3c1dc8c0cb9" containerName="controller-manager" containerID="cri-o://a4bc56fb620a2ac754e629711a463493a84b604927e6f19f38e3eb4c2bb4c1a3" gracePeriod=30 Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.009189 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j" podUID="c7e92bb2-2266-40d8-99a3-c8d004628117" containerName="route-controller-manager" containerID="cri-o://c8ec37321d48e4dabd68313cf856b2c0746a59a2dd65a88baa4ea6b91b59dced" gracePeriod=30 Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.389180 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.395809 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.426782 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"] Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.427567 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7e92bb2-2266-40d8-99a3-c8d004628117" containerName="route-controller-manager" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.427586 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e92bb2-2266-40d8-99a3-c8d004628117" containerName="route-controller-manager" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.427598 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7a196653-53d7-403d-972a-b3c1dc8c0cb9" containerName="controller-manager" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.427605 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a196653-53d7-403d-972a-b3c1dc8c0cb9" containerName="controller-manager" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.427720 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="c7e92bb2-2266-40d8-99a3-c8d004628117" containerName="route-controller-manager" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.427730 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="7a196653-53d7-403d-972a-b3c1dc8c0cb9" containerName="controller-manager" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.433431 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"] Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.433571 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.457118 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-588787f94b-rwxsm"] Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.460495 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.460860 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-588787f94b-rwxsm"] Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.461278 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7a196653-53d7-403d-972a-b3c1dc8c0cb9-tmp\") pod \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.461323 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e92bb2-2266-40d8-99a3-c8d004628117-config\") pod \"c7e92bb2-2266-40d8-99a3-c8d004628117\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.461354 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-client-ca\") pod \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.461377 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c7e92bb2-2266-40d8-99a3-c8d004628117-tmp\") pod \"c7e92bb2-2266-40d8-99a3-c8d004628117\" (UID: 
\"c7e92bb2-2266-40d8-99a3-c8d004628117\") " Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.461414 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-proxy-ca-bundles\") pod \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.461444 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7e92bb2-2266-40d8-99a3-c8d004628117-client-ca\") pod \"c7e92bb2-2266-40d8-99a3-c8d004628117\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.461477 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a196653-53d7-403d-972a-b3c1dc8c0cb9-serving-cert\") pod \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.461507 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7e92bb2-2266-40d8-99a3-c8d004628117-serving-cert\") pod \"c7e92bb2-2266-40d8-99a3-c8d004628117\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.461563 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq9d4\" (UniqueName: \"kubernetes.io/projected/7a196653-53d7-403d-972a-b3c1dc8c0cb9-kube-api-access-xq9d4\") pod \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.461621 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-config\") pod \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\" (UID: \"7a196653-53d7-403d-972a-b3c1dc8c0cb9\") " Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.461652 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prldv\" (UniqueName: \"kubernetes.io/projected/c7e92bb2-2266-40d8-99a3-c8d004628117-kube-api-access-prldv\") pod \"c7e92bb2-2266-40d8-99a3-c8d004628117\" (UID: \"c7e92bb2-2266-40d8-99a3-c8d004628117\") " Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.461743 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c195d86-c61b-4cdd-be28-bc64d7f39297-serving-cert\") pod \"route-controller-manager-84c66bb6b6-hx672\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.461795 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c195d86-c61b-4cdd-be28-bc64d7f39297-client-ca\") pod \"route-controller-manager-84c66bb6b6-hx672\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.462800 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a196653-53d7-403d-972a-b3c1dc8c0cb9-tmp" (OuterVolumeSpecName: "tmp") pod "7a196653-53d7-403d-972a-b3c1dc8c0cb9" (UID: "7a196653-53d7-403d-972a-b3c1dc8c0cb9"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.463014 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7e92bb2-2266-40d8-99a3-c8d004628117-tmp" (OuterVolumeSpecName: "tmp") pod "c7e92bb2-2266-40d8-99a3-c8d004628117" (UID: "c7e92bb2-2266-40d8-99a3-c8d004628117"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.463617 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7e92bb2-2266-40d8-99a3-c8d004628117-config" (OuterVolumeSpecName: "config") pod "c7e92bb2-2266-40d8-99a3-c8d004628117" (UID: "c7e92bb2-2266-40d8-99a3-c8d004628117"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.463657 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-config" (OuterVolumeSpecName: "config") pod "7a196653-53d7-403d-972a-b3c1dc8c0cb9" (UID: "7a196653-53d7-403d-972a-b3c1dc8c0cb9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.464499 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-client-ca" (OuterVolumeSpecName: "client-ca") pod "7a196653-53d7-403d-972a-b3c1dc8c0cb9" (UID: "7a196653-53d7-403d-972a-b3c1dc8c0cb9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.465226 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgk8p\" (UniqueName: \"kubernetes.io/projected/0c195d86-c61b-4cdd-be28-bc64d7f39297-kube-api-access-tgk8p\") pod \"route-controller-manager-84c66bb6b6-hx672\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.465292 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7e92bb2-2266-40d8-99a3-c8d004628117-client-ca" (OuterVolumeSpecName: "client-ca") pod "c7e92bb2-2266-40d8-99a3-c8d004628117" (UID: "c7e92bb2-2266-40d8-99a3-c8d004628117"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.465391 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c195d86-c61b-4cdd-be28-bc64d7f39297-config\") pod \"route-controller-manager-84c66bb6b6-hx672\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.465399 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7a196653-53d7-403d-972a-b3c1dc8c0cb9" (UID: "7a196653-53d7-403d-972a-b3c1dc8c0cb9"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.465442 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c195d86-c61b-4cdd-be28-bc64d7f39297-tmp\") pod \"route-controller-manager-84c66bb6b6-hx672\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.465585 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.465604 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7a196653-53d7-403d-972a-b3c1dc8c0cb9-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.465612 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e92bb2-2266-40d8-99a3-c8d004628117-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.465621 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.465629 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c7e92bb2-2266-40d8-99a3-c8d004628117-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.465637 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a196653-53d7-403d-972a-b3c1dc8c0cb9-proxy-ca-bundles\") on node 
\"crc\" DevicePath \"\"" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.465645 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7e92bb2-2266-40d8-99a3-c8d004628117-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.472094 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a196653-53d7-403d-972a-b3c1dc8c0cb9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a196653-53d7-403d-972a-b3c1dc8c0cb9" (UID: "7a196653-53d7-403d-972a-b3c1dc8c0cb9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.478264 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e92bb2-2266-40d8-99a3-c8d004628117-kube-api-access-prldv" (OuterVolumeSpecName: "kube-api-access-prldv") pod "c7e92bb2-2266-40d8-99a3-c8d004628117" (UID: "c7e92bb2-2266-40d8-99a3-c8d004628117"). InnerVolumeSpecName "kube-api-access-prldv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.478339 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a196653-53d7-403d-972a-b3c1dc8c0cb9-kube-api-access-xq9d4" (OuterVolumeSpecName: "kube-api-access-xq9d4") pod "7a196653-53d7-403d-972a-b3c1dc8c0cb9" (UID: "7a196653-53d7-403d-972a-b3c1dc8c0cb9"). InnerVolumeSpecName "kube-api-access-xq9d4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.478437 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e92bb2-2266-40d8-99a3-c8d004628117-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c7e92bb2-2266-40d8-99a3-c8d004628117" (UID: "c7e92bb2-2266-40d8-99a3-c8d004628117"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.566627 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c195d86-c61b-4cdd-be28-bc64d7f39297-serving-cert\") pod \"route-controller-manager-84c66bb6b6-hx672\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.566757 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c195d86-c61b-4cdd-be28-bc64d7f39297-client-ca\") pod \"route-controller-manager-84c66bb6b6-hx672\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.566806 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgk8p\" (UniqueName: \"kubernetes.io/projected/0c195d86-c61b-4cdd-be28-bc64d7f39297-kube-api-access-tgk8p\") pod \"route-controller-manager-84c66bb6b6-hx672\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.566863 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-tmp\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.566911 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pch4v\" (UniqueName: \"kubernetes.io/projected/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-kube-api-access-pch4v\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.566952 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c195d86-c61b-4cdd-be28-bc64d7f39297-config\") pod \"route-controller-manager-84c66bb6b6-hx672\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.566985 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-client-ca\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.567013 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-proxy-ca-bundles\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm" Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.567044 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-serving-cert\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " 
pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.567079 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c195d86-c61b-4cdd-be28-bc64d7f39297-tmp\") pod \"route-controller-manager-84c66bb6b6-hx672\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.567148 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-config\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.567269 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a196653-53d7-403d-972a-b3c1dc8c0cb9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.567290 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7e92bb2-2266-40d8-99a3-c8d004628117-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.567308 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xq9d4\" (UniqueName: \"kubernetes.io/projected/7a196653-53d7-403d-972a-b3c1dc8c0cb9-kube-api-access-xq9d4\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.567327 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-prldv\" (UniqueName: \"kubernetes.io/projected/c7e92bb2-2266-40d8-99a3-c8d004628117-kube-api-access-prldv\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.568066 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c195d86-c61b-4cdd-be28-bc64d7f39297-tmp\") pod \"route-controller-manager-84c66bb6b6-hx672\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.568845 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c195d86-c61b-4cdd-be28-bc64d7f39297-client-ca\") pod \"route-controller-manager-84c66bb6b6-hx672\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.569057 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c195d86-c61b-4cdd-be28-bc64d7f39297-config\") pod \"route-controller-manager-84c66bb6b6-hx672\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.574086 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c195d86-c61b-4cdd-be28-bc64d7f39297-serving-cert\") pod \"route-controller-manager-84c66bb6b6-hx672\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.586636 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgk8p\" (UniqueName: \"kubernetes.io/projected/0c195d86-c61b-4cdd-be28-bc64d7f39297-kube-api-access-tgk8p\") pod \"route-controller-manager-84c66bb6b6-hx672\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.668483 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pch4v\" (UniqueName: \"kubernetes.io/projected/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-kube-api-access-pch4v\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.668658 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-client-ca\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.668757 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-proxy-ca-bundles\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.670316 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-client-ca\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.670766 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-proxy-ca-bundles\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.670921 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-serving-cert\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.671852 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-config\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.672230 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-tmp\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.674820 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-serving-cert\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.674919 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-tmp\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.674932 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-config\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.690590 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pch4v\" (UniqueName: \"kubernetes.io/projected/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-kube-api-access-pch4v\") pod \"controller-manager-588787f94b-rwxsm\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") " pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.757318 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.797502 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.956093 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"]
Jan 21 00:14:10 crc kubenswrapper[5118]: W0121 00:14:10.958841 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c195d86_c61b_4cdd_be28_bc64d7f39297.slice/crio-53ae1dc8a9a9da375b862747c53caacb58ce85a41f62a351435df96392111c32 WatchSource:0}: Error finding container 53ae1dc8a9a9da375b862747c53caacb58ce85a41f62a351435df96392111c32: Status 404 returned error can't find the container with id 53ae1dc8a9a9da375b862747c53caacb58ce85a41f62a351435df96392111c32
Jan 21 00:14:10 crc kubenswrapper[5118]: I0121 00:14:10.992973 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-588787f94b-rwxsm"]
Jan 21 00:14:11 crc kubenswrapper[5118]: W0121 00:14:11.011011 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb53735f9_cfc6_40e8_9ed0_98e07b6f60e6.slice/crio-f7d4d30f244881731f22f4922e8b6361eaf2fc92c7ce39224d450446e148c46f WatchSource:0}: Error finding container f7d4d30f244881731f22f4922e8b6361eaf2fc92c7ce39224d450446e148c46f: Status 404 returned error can't find the container with id f7d4d30f244881731f22f4922e8b6361eaf2fc92c7ce39224d450446e148c46f
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.020305 5118 generic.go:358] "Generic (PLEG): container finished" podID="c7e92bb2-2266-40d8-99a3-c8d004628117" containerID="c8ec37321d48e4dabd68313cf856b2c0746a59a2dd65a88baa4ea6b91b59dced" exitCode=0
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.020413 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j" event={"ID":"c7e92bb2-2266-40d8-99a3-c8d004628117","Type":"ContainerDied","Data":"c8ec37321d48e4dabd68313cf856b2c0746a59a2dd65a88baa4ea6b91b59dced"}
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.020446 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j" event={"ID":"c7e92bb2-2266-40d8-99a3-c8d004628117","Type":"ContainerDied","Data":"082341654c80228d604ad206f1369533f93b460d9e88bb807e43434cf0d24820"}
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.020466 5118 scope.go:117] "RemoveContainer" containerID="c8ec37321d48e4dabd68313cf856b2c0746a59a2dd65a88baa4ea6b91b59dced"
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.020637 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.025611 5118 generic.go:358] "Generic (PLEG): container finished" podID="7a196653-53d7-403d-972a-b3c1dc8c0cb9" containerID="a4bc56fb620a2ac754e629711a463493a84b604927e6f19f38e3eb4c2bb4c1a3" exitCode=0
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.025720 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.025756 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v" event={"ID":"7a196653-53d7-403d-972a-b3c1dc8c0cb9","Type":"ContainerDied","Data":"a4bc56fb620a2ac754e629711a463493a84b604927e6f19f38e3eb4c2bb4c1a3"}
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.025783 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8896fddf-w2m2v" event={"ID":"7a196653-53d7-403d-972a-b3c1dc8c0cb9","Type":"ContainerDied","Data":"3a9a0b9360ce52cd168445ae2e19698d95775b1d283b99132481a76536676f8a"}
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.027028 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" event={"ID":"0c195d86-c61b-4cdd-be28-bc64d7f39297","Type":"ContainerStarted","Data":"53ae1dc8a9a9da375b862747c53caacb58ce85a41f62a351435df96392111c32"}
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.049800 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"]
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.055358 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c8896fddf-w2m2v"]
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.057094 5118 scope.go:117] "RemoveContainer" containerID="c8ec37321d48e4dabd68313cf856b2c0746a59a2dd65a88baa4ea6b91b59dced"
Jan 21 00:14:11 crc kubenswrapper[5118]: E0121 00:14:11.057897 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8ec37321d48e4dabd68313cf856b2c0746a59a2dd65a88baa4ea6b91b59dced\": container with ID starting with c8ec37321d48e4dabd68313cf856b2c0746a59a2dd65a88baa4ea6b91b59dced not found: ID does not exist" containerID="c8ec37321d48e4dabd68313cf856b2c0746a59a2dd65a88baa4ea6b91b59dced"
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.057973 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ec37321d48e4dabd68313cf856b2c0746a59a2dd65a88baa4ea6b91b59dced"} err="failed to get container status \"c8ec37321d48e4dabd68313cf856b2c0746a59a2dd65a88baa4ea6b91b59dced\": rpc error: code = NotFound desc = could not find container \"c8ec37321d48e4dabd68313cf856b2c0746a59a2dd65a88baa4ea6b91b59dced\": container with ID starting with c8ec37321d48e4dabd68313cf856b2c0746a59a2dd65a88baa4ea6b91b59dced not found: ID does not exist"
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.058019 5118 scope.go:117] "RemoveContainer" containerID="a4bc56fb620a2ac754e629711a463493a84b604927e6f19f38e3eb4c2bb4c1a3"
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.064641 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"]
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.068842 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f84876cb9-hvz2j"]
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.071036 5118 scope.go:117] "RemoveContainer" containerID="a4bc56fb620a2ac754e629711a463493a84b604927e6f19f38e3eb4c2bb4c1a3"
Jan 21 00:14:11 crc kubenswrapper[5118]: E0121 00:14:11.071720 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4bc56fb620a2ac754e629711a463493a84b604927e6f19f38e3eb4c2bb4c1a3\": container with ID starting with a4bc56fb620a2ac754e629711a463493a84b604927e6f19f38e3eb4c2bb4c1a3 not found: ID does not exist" containerID="a4bc56fb620a2ac754e629711a463493a84b604927e6f19f38e3eb4c2bb4c1a3"
Jan 21 00:14:11 crc kubenswrapper[5118]: I0121 00:14:11.071765 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bc56fb620a2ac754e629711a463493a84b604927e6f19f38e3eb4c2bb4c1a3"} err="failed to get container status \"a4bc56fb620a2ac754e629711a463493a84b604927e6f19f38e3eb4c2bb4c1a3\": rpc error: code = NotFound desc = could not find container \"a4bc56fb620a2ac754e629711a463493a84b604927e6f19f38e3eb4c2bb4c1a3\": container with ID starting with a4bc56fb620a2ac754e629711a463493a84b604927e6f19f38e3eb4c2bb4c1a3 not found: ID does not exist"
Jan 21 00:14:12 crc kubenswrapper[5118]: I0121 00:14:12.033073 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" event={"ID":"0c195d86-c61b-4cdd-be28-bc64d7f39297","Type":"ContainerStarted","Data":"0fa2c056f09ed707afbc657aa2ecf3d1dfb87811e62b51102bba168e8400058e"}
Jan 21 00:14:12 crc kubenswrapper[5118]: I0121 00:14:12.033336 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"
Jan 21 00:14:12 crc kubenswrapper[5118]: I0121 00:14:12.035149 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm" event={"ID":"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6","Type":"ContainerStarted","Data":"089da6e24eb2e104359a66bfe529ac2785b4c0d30628c5b04ae2857045da28d7"}
Jan 21 00:14:12 crc kubenswrapper[5118]: I0121 00:14:12.035217 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm" event={"ID":"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6","Type":"ContainerStarted","Data":"f7d4d30f244881731f22f4922e8b6361eaf2fc92c7ce39224d450446e148c46f"}
Jan 21 00:14:12 crc kubenswrapper[5118]: I0121 00:14:12.035243 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:12 crc kubenswrapper[5118]: I0121 00:14:12.040562 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:12 crc kubenswrapper[5118]: I0121 00:14:12.042477 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"
Jan 21 00:14:12 crc kubenswrapper[5118]: I0121 00:14:12.054933 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" podStartSLOduration=4.054915527 podStartE2EDuration="4.054915527s" podCreationTimestamp="2026-01-21 00:14:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:14:12.053449948 +0000 UTC m=+307.377696976" watchObservedRunningTime="2026-01-21 00:14:12.054915527 +0000 UTC m=+307.379162545"
Jan 21 00:14:12 crc kubenswrapper[5118]: I0121 00:14:12.088453 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm" podStartSLOduration=4.088432645 podStartE2EDuration="4.088432645s" podCreationTimestamp="2026-01-21 00:14:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:14:12.084704545 +0000 UTC m=+307.408951563" watchObservedRunningTime="2026-01-21 00:14:12.088432645 +0000 UTC m=+307.412679663"
Jan 21 00:14:12 crc kubenswrapper[5118]: I0121 00:14:12.981972 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a196653-53d7-403d-972a-b3c1dc8c0cb9" path="/var/lib/kubelet/pods/7a196653-53d7-403d-972a-b3c1dc8c0cb9/volumes"
Jan 21 00:14:12 crc kubenswrapper[5118]: I0121 00:14:12.982857 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e92bb2-2266-40d8-99a3-c8d004628117" path="/var/lib/kubelet/pods/c7e92bb2-2266-40d8-99a3-c8d004628117/volumes"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.091084 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-588787f94b-rwxsm"]
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.091608 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm" podUID="b53735f9-cfc6-40e8-9ed0-98e07b6f60e6" containerName="controller-manager" containerID="cri-o://089da6e24eb2e104359a66bfe529ac2785b4c0d30628c5b04ae2857045da28d7" gracePeriod=30
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.108220 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"]
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.108494 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" podUID="0c195d86-c61b-4cdd-be28-bc64d7f39297" containerName="route-controller-manager" containerID="cri-o://0fa2c056f09ed707afbc657aa2ecf3d1dfb87811e62b51102bba168e8400058e" gracePeriod=30
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.453030 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.458450 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.481441 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-765c599d67-96mkg"]
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.486785 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c195d86-c61b-4cdd-be28-bc64d7f39297" containerName="route-controller-manager"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.486963 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c195d86-c61b-4cdd-be28-bc64d7f39297" containerName="route-controller-manager"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.487029 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b53735f9-cfc6-40e8-9ed0-98e07b6f60e6" containerName="controller-manager"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.487080 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="b53735f9-cfc6-40e8-9ed0-98e07b6f60e6" containerName="controller-manager"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.487222 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="0c195d86-c61b-4cdd-be28-bc64d7f39297" containerName="route-controller-manager"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.487301 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="b53735f9-cfc6-40e8-9ed0-98e07b6f60e6" containerName="controller-manager"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.494783 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-765c599d67-96mkg"]
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.495084 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-765c599d67-96mkg"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.503151 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq"]
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.508893 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.520825 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq"]
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548149 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c195d86-c61b-4cdd-be28-bc64d7f39297-config\") pod \"0c195d86-c61b-4cdd-be28-bc64d7f39297\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") "
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548218 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-config\") pod \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") "
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548237 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-proxy-ca-bundles\") pod \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") "
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548273 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-tmp\") pod \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") "
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548291 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-serving-cert\") pod \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") "
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548326 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pch4v\" (UniqueName: \"kubernetes.io/projected/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-kube-api-access-pch4v\") pod \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") "
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548468 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c195d86-c61b-4cdd-be28-bc64d7f39297-serving-cert\") pod \"0c195d86-c61b-4cdd-be28-bc64d7f39297\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") "
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548579 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgk8p\" (UniqueName: \"kubernetes.io/projected/0c195d86-c61b-4cdd-be28-bc64d7f39297-kube-api-access-tgk8p\") pod \"0c195d86-c61b-4cdd-be28-bc64d7f39297\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") "
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548637 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c195d86-c61b-4cdd-be28-bc64d7f39297-tmp\") pod \"0c195d86-c61b-4cdd-be28-bc64d7f39297\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") "
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548654 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-client-ca\") pod \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\" (UID: \"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6\") "
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548662 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c195d86-c61b-4cdd-be28-bc64d7f39297-config" (OuterVolumeSpecName: "config") pod "0c195d86-c61b-4cdd-be28-bc64d7f39297" (UID: "0c195d86-c61b-4cdd-be28-bc64d7f39297"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548686 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c195d86-c61b-4cdd-be28-bc64d7f39297-client-ca\") pod \"0c195d86-c61b-4cdd-be28-bc64d7f39297\" (UID: \"0c195d86-c61b-4cdd-be28-bc64d7f39297\") "
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548826 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c195d86-c61b-4cdd-be28-bc64d7f39297-tmp" (OuterVolumeSpecName: "tmp") pod "0c195d86-c61b-4cdd-be28-bc64d7f39297" (UID: "0c195d86-c61b-4cdd-be28-bc64d7f39297"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548869 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58609d84-bc4a-4d86-b809-b325339ceded-config\") pod \"route-controller-manager-68f4794877-bj9qq\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") " pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548927 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-proxy-ca-bundles\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548944 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-tmp" (OuterVolumeSpecName: "tmp") pod "b53735f9-cfc6-40e8-9ed0-98e07b6f60e6" (UID: "b53735f9-cfc6-40e8-9ed0-98e07b6f60e6"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.548963 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-client-ca\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549033 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58609d84-bc4a-4d86-b809-b325339ceded-serving-cert\") pod \"route-controller-manager-68f4794877-bj9qq\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") " pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549050 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-config\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549076 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58609d84-bc4a-4d86-b809-b325339ceded-tmp\") pod \"route-controller-manager-68f4794877-bj9qq\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") " pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549116 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/92b258d3-fca5-41a8-ab0e-80d50a26db7b-tmp\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549128 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c195d86-c61b-4cdd-be28-bc64d7f39297-client-ca" (OuterVolumeSpecName: "client-ca") pod "0c195d86-c61b-4cdd-be28-bc64d7f39297" (UID: "0c195d86-c61b-4cdd-be28-bc64d7f39297"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549204 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58609d84-bc4a-4d86-b809-b325339ceded-client-ca\") pod \"route-controller-manager-68f4794877-bj9qq\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") " pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549235 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7gg9\" (UniqueName: \"kubernetes.io/projected/58609d84-bc4a-4d86-b809-b325339ceded-kube-api-access-v7gg9\") pod \"route-controller-manager-68f4794877-bj9qq\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") " pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549281 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92b258d3-fca5-41a8-ab0e-80d50a26db7b-serving-cert\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549353 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zlz2\" (UniqueName: \"kubernetes.io/projected/92b258d3-fca5-41a8-ab0e-80d50a26db7b-kube-api-access-2zlz2\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg"
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549557 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-client-ca" (OuterVolumeSpecName: "client-ca") pod "b53735f9-cfc6-40e8-9ed0-98e07b6f60e6" (UID: "b53735f9-cfc6-40e8-9ed0-98e07b6f60e6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549652 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c195d86-c61b-4cdd-be28-bc64d7f39297-tmp\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549674 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549687 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c195d86-c61b-4cdd-be28-bc64d7f39297-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549698 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c195d86-c61b-4cdd-be28-bc64d7f39297-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549710 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-tmp\") on node \"crc\" DevicePath \"\""
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549929 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-config" (OuterVolumeSpecName: "config") pod "b53735f9-cfc6-40e8-9ed0-98e07b6f60e6" (UID: "b53735f9-cfc6-40e8-9ed0-98e07b6f60e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.549987 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b53735f9-cfc6-40e8-9ed0-98e07b6f60e6" (UID: "b53735f9-cfc6-40e8-9ed0-98e07b6f60e6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.553862 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-kube-api-access-pch4v" (OuterVolumeSpecName: "kube-api-access-pch4v") pod "b53735f9-cfc6-40e8-9ed0-98e07b6f60e6" (UID: "b53735f9-cfc6-40e8-9ed0-98e07b6f60e6"). InnerVolumeSpecName "kube-api-access-pch4v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.553919 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c195d86-c61b-4cdd-be28-bc64d7f39297-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0c195d86-c61b-4cdd-be28-bc64d7f39297" (UID: "0c195d86-c61b-4cdd-be28-bc64d7f39297"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.554265 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c195d86-c61b-4cdd-be28-bc64d7f39297-kube-api-access-tgk8p" (OuterVolumeSpecName: "kube-api-access-tgk8p") pod "0c195d86-c61b-4cdd-be28-bc64d7f39297" (UID: "0c195d86-c61b-4cdd-be28-bc64d7f39297"). InnerVolumeSpecName "kube-api-access-tgk8p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.557488 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b53735f9-cfc6-40e8-9ed0-98e07b6f60e6" (UID: "b53735f9-cfc6-40e8-9ed0-98e07b6f60e6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.650397 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2zlz2\" (UniqueName: \"kubernetes.io/projected/92b258d3-fca5-41a8-ab0e-80d50a26db7b-kube-api-access-2zlz2\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.650656 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58609d84-bc4a-4d86-b809-b325339ceded-config\") pod \"route-controller-manager-68f4794877-bj9qq\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") " pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.650786 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-proxy-ca-bundles\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.650856 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-client-ca\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.650923 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58609d84-bc4a-4d86-b809-b325339ceded-serving-cert\") pod \"route-controller-manager-68f4794877-bj9qq\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") " pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.650962 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-config\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.651008 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58609d84-bc4a-4d86-b809-b325339ceded-tmp\") pod \"route-controller-manager-68f4794877-bj9qq\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") " pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.651074 5118 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/92b258d3-fca5-41a8-ab0e-80d50a26db7b-tmp\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.651141 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58609d84-bc4a-4d86-b809-b325339ceded-client-ca\") pod \"route-controller-manager-68f4794877-bj9qq\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") " pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.651223 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v7gg9\" (UniqueName: \"kubernetes.io/projected/58609d84-bc4a-4d86-b809-b325339ceded-kube-api-access-v7gg9\") pod \"route-controller-manager-68f4794877-bj9qq\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") " pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.651266 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92b258d3-fca5-41a8-ab0e-80d50a26db7b-serving-cert\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.651377 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tgk8p\" (UniqueName: \"kubernetes.io/projected/0c195d86-c61b-4cdd-be28-bc64d7f39297-kube-api-access-tgk8p\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.651400 5118 
reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.651417 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.651434 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.651450 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pch4v\" (UniqueName: \"kubernetes.io/projected/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6-kube-api-access-pch4v\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.651466 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c195d86-c61b-4cdd-be28-bc64d7f39297-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.651673 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58609d84-bc4a-4d86-b809-b325339ceded-config\") pod \"route-controller-manager-68f4794877-bj9qq\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") " pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.651753 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-client-ca\") pod \"controller-manager-765c599d67-96mkg\" (UID: 
\"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.651846 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/92b258d3-fca5-41a8-ab0e-80d50a26db7b-tmp\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.652076 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58609d84-bc4a-4d86-b809-b325339ceded-tmp\") pod \"route-controller-manager-68f4794877-bj9qq\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") " pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.652531 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-proxy-ca-bundles\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.652642 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-config\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.652856 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58609d84-bc4a-4d86-b809-b325339ceded-client-ca\") pod 
\"route-controller-manager-68f4794877-bj9qq\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") " pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.654799 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58609d84-bc4a-4d86-b809-b325339ceded-serving-cert\") pod \"route-controller-manager-68f4794877-bj9qq\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") " pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.661599 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92b258d3-fca5-41a8-ab0e-80d50a26db7b-serving-cert\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.667209 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7gg9\" (UniqueName: \"kubernetes.io/projected/58609d84-bc4a-4d86-b809-b325339ceded-kube-api-access-v7gg9\") pod \"route-controller-manager-68f4794877-bj9qq\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") " pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.670418 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zlz2\" (UniqueName: \"kubernetes.io/projected/92b258d3-fca5-41a8-ab0e-80d50a26db7b-kube-api-access-2zlz2\") pod \"controller-manager-765c599d67-96mkg\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") " pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.819198 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:14:16 crc kubenswrapper[5118]: I0121 00:14:16.828667 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.030743 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq"] Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.065772 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-765c599d67-96mkg"] Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.069697 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" event={"ID":"58609d84-bc4a-4d86-b809-b325339ceded","Type":"ContainerStarted","Data":"e5133c8ca2b58c635c3cc302fa592c4a9ab1455bd4db35bacdfbb68c555e9b88"} Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.071621 5118 generic.go:358] "Generic (PLEG): container finished" podID="0c195d86-c61b-4cdd-be28-bc64d7f39297" containerID="0fa2c056f09ed707afbc657aa2ecf3d1dfb87811e62b51102bba168e8400058e" exitCode=0 Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.071724 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.071678 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" event={"ID":"0c195d86-c61b-4cdd-be28-bc64d7f39297","Type":"ContainerDied","Data":"0fa2c056f09ed707afbc657aa2ecf3d1dfb87811e62b51102bba168e8400058e"} Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.071857 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672" event={"ID":"0c195d86-c61b-4cdd-be28-bc64d7f39297","Type":"ContainerDied","Data":"53ae1dc8a9a9da375b862747c53caacb58ce85a41f62a351435df96392111c32"} Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.071884 5118 scope.go:117] "RemoveContainer" containerID="0fa2c056f09ed707afbc657aa2ecf3d1dfb87811e62b51102bba168e8400058e" Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.078925 5118 generic.go:358] "Generic (PLEG): container finished" podID="b53735f9-cfc6-40e8-9ed0-98e07b6f60e6" containerID="089da6e24eb2e104359a66bfe529ac2785b4c0d30628c5b04ae2857045da28d7" exitCode=0 Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.078984 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm" event={"ID":"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6","Type":"ContainerDied","Data":"089da6e24eb2e104359a66bfe529ac2785b4c0d30628c5b04ae2857045da28d7"} Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.079001 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm" Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.079011 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588787f94b-rwxsm" event={"ID":"b53735f9-cfc6-40e8-9ed0-98e07b6f60e6","Type":"ContainerDied","Data":"f7d4d30f244881731f22f4922e8b6361eaf2fc92c7ce39224d450446e148c46f"} Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.111502 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"] Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.117868 5118 scope.go:117] "RemoveContainer" containerID="0fa2c056f09ed707afbc657aa2ecf3d1dfb87811e62b51102bba168e8400058e" Jan 21 00:14:17 crc kubenswrapper[5118]: E0121 00:14:17.118829 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fa2c056f09ed707afbc657aa2ecf3d1dfb87811e62b51102bba168e8400058e\": container with ID starting with 0fa2c056f09ed707afbc657aa2ecf3d1dfb87811e62b51102bba168e8400058e not found: ID does not exist" containerID="0fa2c056f09ed707afbc657aa2ecf3d1dfb87811e62b51102bba168e8400058e" Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.118860 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fa2c056f09ed707afbc657aa2ecf3d1dfb87811e62b51102bba168e8400058e"} err="failed to get container status \"0fa2c056f09ed707afbc657aa2ecf3d1dfb87811e62b51102bba168e8400058e\": rpc error: code = NotFound desc = could not find container \"0fa2c056f09ed707afbc657aa2ecf3d1dfb87811e62b51102bba168e8400058e\": container with ID starting with 0fa2c056f09ed707afbc657aa2ecf3d1dfb87811e62b51102bba168e8400058e not found: ID does not exist" Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.118881 5118 scope.go:117] "RemoveContainer" 
containerID="089da6e24eb2e104359a66bfe529ac2785b4c0d30628c5b04ae2857045da28d7" Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.121517 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84c66bb6b6-hx672"] Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.125460 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-588787f94b-rwxsm"] Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.129488 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-588787f94b-rwxsm"] Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.131589 5118 scope.go:117] "RemoveContainer" containerID="089da6e24eb2e104359a66bfe529ac2785b4c0d30628c5b04ae2857045da28d7" Jan 21 00:14:17 crc kubenswrapper[5118]: E0121 00:14:17.132262 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"089da6e24eb2e104359a66bfe529ac2785b4c0d30628c5b04ae2857045da28d7\": container with ID starting with 089da6e24eb2e104359a66bfe529ac2785b4c0d30628c5b04ae2857045da28d7 not found: ID does not exist" containerID="089da6e24eb2e104359a66bfe529ac2785b4c0d30628c5b04ae2857045da28d7" Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.132324 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"089da6e24eb2e104359a66bfe529ac2785b4c0d30628c5b04ae2857045da28d7"} err="failed to get container status \"089da6e24eb2e104359a66bfe529ac2785b4c0d30628c5b04ae2857045da28d7\": rpc error: code = NotFound desc = could not find container \"089da6e24eb2e104359a66bfe529ac2785b4c0d30628c5b04ae2857045da28d7\": container with ID starting with 089da6e24eb2e104359a66bfe529ac2785b4c0d30628c5b04ae2857045da28d7 not found: ID does not exist" Jan 21 00:14:17 crc kubenswrapper[5118]: I0121 00:14:17.544086 5118 dynamic_cafile_content.go:123] 
"Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 00:14:18 crc kubenswrapper[5118]: I0121 00:14:18.087740 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" event={"ID":"92b258d3-fca5-41a8-ab0e-80d50a26db7b","Type":"ContainerStarted","Data":"44d41c951009faaf8403ec57a7e4f420f02131f6f5ae3d25db4a9c7b7b240377"} Jan 21 00:14:18 crc kubenswrapper[5118]: I0121 00:14:18.088055 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:14:18 crc kubenswrapper[5118]: I0121 00:14:18.088189 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" event={"ID":"92b258d3-fca5-41a8-ab0e-80d50a26db7b","Type":"ContainerStarted","Data":"571120ecee06bc030913714454ec86b6d3bed63fa2c78527f81112a8c6758bf5"} Jan 21 00:14:18 crc kubenswrapper[5118]: I0121 00:14:18.089448 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" event={"ID":"58609d84-bc4a-4d86-b809-b325339ceded","Type":"ContainerStarted","Data":"e680e479c4e636b91fd2974243a1ba95ce4bd187e6b03032a82459784c0157cb"} Jan 21 00:14:18 crc kubenswrapper[5118]: I0121 00:14:18.089750 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" Jan 21 00:14:18 crc kubenswrapper[5118]: I0121 00:14:18.092573 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:14:18 crc kubenswrapper[5118]: I0121 00:14:18.096538 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" Jan 21 
00:14:18 crc kubenswrapper[5118]: I0121 00:14:18.105609 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" podStartSLOduration=2.105590501 podStartE2EDuration="2.105590501s" podCreationTimestamp="2026-01-21 00:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:14:18.104679167 +0000 UTC m=+313.428926205" watchObservedRunningTime="2026-01-21 00:14:18.105590501 +0000 UTC m=+313.429837529" Jan 21 00:14:18 crc kubenswrapper[5118]: I0121 00:14:18.149750 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" podStartSLOduration=2.149727254 podStartE2EDuration="2.149727254s" podCreationTimestamp="2026-01-21 00:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:14:18.141567325 +0000 UTC m=+313.465814353" watchObservedRunningTime="2026-01-21 00:14:18.149727254 +0000 UTC m=+313.473974292" Jan 21 00:14:18 crc kubenswrapper[5118]: I0121 00:14:18.985072 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c195d86-c61b-4cdd-be28-bc64d7f39297" path="/var/lib/kubelet/pods/0c195d86-c61b-4cdd-be28-bc64d7f39297/volumes" Jan 21 00:14:18 crc kubenswrapper[5118]: I0121 00:14:18.985655 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b53735f9-cfc6-40e8-9ed0-98e07b6f60e6" path="/var/lib/kubelet/pods/b53735f9-cfc6-40e8-9ed0-98e07b6f60e6/volumes" Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.615249 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6c5wr"] Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.618522 5118 kuberuntime_container.go:858] "Killing container with a grace 
period" pod="openshift-marketplace/certified-operators-6c5wr" podUID="28172373-ad9f-4755-a060-b467a2817a67" containerName="registry-server" containerID="cri-o://71f5cae005056bcf6ab18ef5f90f6f3cfd163f83065921bc0194f42eb2ee2571" gracePeriod=30 Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.621871 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s5pql"] Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.622218 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s5pql" podUID="67f8120d-af6d-4e77-9772-0fc55dfad0bf" containerName="registry-server" containerID="cri-o://00fe3c213943c94d49f9c07ebca5629df4b6ccb2fab0e8cfe73f67a6fda85a09" gracePeriod=30 Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.634788 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"] Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.635383 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" podUID="a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" containerName="marketplace-operator" containerID="cri-o://7fc80a9d862859bc887a15b3aa15cd37e9cfe4c7c11072a59fada4a5f9114766" gracePeriod=30 Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.646478 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4frhj"] Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.646840 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4frhj" podUID="d18361c6-e5b6-44f0-b6d4-4dae1ff8c741" containerName="registry-server" containerID="cri-o://b612f5c41ec000be28e4c9c9896ae76218539305be25a1fd8cd030aa10e11d17" gracePeriod=30 Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.659327 5118 kubelet.go:2537] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-4wgxd"] Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.663267 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.671579 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-94zfv"] Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.672428 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-4wgxd"] Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.672647 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-94zfv" podUID="8d65d512-7c64-462f-b40d-ad0252a88233" containerName="registry-server" containerID="cri-o://532226610bc3672e54c1a84b2f208a59250e8c5d82ca78ebcde74863bae441d3" gracePeriod=30 Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.733848 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxg56\" (UniqueName: \"kubernetes.io/projected/602c053c-5e99-4f10-888b-0ea7a740a476-kube-api-access-cxg56\") pod \"marketplace-operator-547dbd544d-4wgxd\" (UID: \"602c053c-5e99-4f10-888b-0ea7a740a476\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.734129 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/602c053c-5e99-4f10-888b-0ea7a740a476-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-4wgxd\" (UID: \"602c053c-5e99-4f10-888b-0ea7a740a476\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.734361 5118 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/602c053c-5e99-4f10-888b-0ea7a740a476-tmp\") pod \"marketplace-operator-547dbd544d-4wgxd\" (UID: \"602c053c-5e99-4f10-888b-0ea7a740a476\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.734503 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/602c053c-5e99-4f10-888b-0ea7a740a476-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-4wgxd\" (UID: \"602c053c-5e99-4f10-888b-0ea7a740a476\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.836343 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/602c053c-5e99-4f10-888b-0ea7a740a476-tmp\") pod \"marketplace-operator-547dbd544d-4wgxd\" (UID: \"602c053c-5e99-4f10-888b-0ea7a740a476\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.836966 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/602c053c-5e99-4f10-888b-0ea7a740a476-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-4wgxd\" (UID: \"602c053c-5e99-4f10-888b-0ea7a740a476\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.837040 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cxg56\" (UniqueName: \"kubernetes.io/projected/602c053c-5e99-4f10-888b-0ea7a740a476-kube-api-access-cxg56\") pod \"marketplace-operator-547dbd544d-4wgxd\" (UID: 
\"602c053c-5e99-4f10-888b-0ea7a740a476\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.837111 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/602c053c-5e99-4f10-888b-0ea7a740a476-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-4wgxd\" (UID: \"602c053c-5e99-4f10-888b-0ea7a740a476\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.837842 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/602c053c-5e99-4f10-888b-0ea7a740a476-tmp\") pod \"marketplace-operator-547dbd544d-4wgxd\" (UID: \"602c053c-5e99-4f10-888b-0ea7a740a476\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.838494 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/602c053c-5e99-4f10-888b-0ea7a740a476-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-4wgxd\" (UID: \"602c053c-5e99-4f10-888b-0ea7a740a476\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.853255 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/602c053c-5e99-4f10-888b-0ea7a740a476-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-4wgxd\" (UID: \"602c053c-5e99-4f10-888b-0ea7a740a476\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.856919 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxg56\" (UniqueName: 
\"kubernetes.io/projected/602c053c-5e99-4f10-888b-0ea7a740a476-kube-api-access-cxg56\") pod \"marketplace-operator-547dbd544d-4wgxd\" (UID: \"602c053c-5e99-4f10-888b-0ea7a740a476\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.890728 5118 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-5r9pr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 21 00:14:42 crc kubenswrapper[5118]: I0121 00:14:42.890784 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" podUID="a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.045985 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.050569 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6c5wr" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.141813 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28172373-ad9f-4755-a060-b467a2817a67-catalog-content\") pod \"28172373-ad9f-4755-a060-b467a2817a67\" (UID: \"28172373-ad9f-4755-a060-b467a2817a67\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.142010 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28172373-ad9f-4755-a060-b467a2817a67-utilities\") pod \"28172373-ad9f-4755-a060-b467a2817a67\" (UID: \"28172373-ad9f-4755-a060-b467a2817a67\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.142057 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnxbp\" (UniqueName: \"kubernetes.io/projected/28172373-ad9f-4755-a060-b467a2817a67-kube-api-access-hnxbp\") pod \"28172373-ad9f-4755-a060-b467a2817a67\" (UID: \"28172373-ad9f-4755-a060-b467a2817a67\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.144133 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28172373-ad9f-4755-a060-b467a2817a67-utilities" (OuterVolumeSpecName: "utilities") pod "28172373-ad9f-4755-a060-b467a2817a67" (UID: "28172373-ad9f-4755-a060-b467a2817a67"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.162496 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28172373-ad9f-4755-a060-b467a2817a67-kube-api-access-hnxbp" (OuterVolumeSpecName: "kube-api-access-hnxbp") pod "28172373-ad9f-4755-a060-b467a2817a67" (UID: "28172373-ad9f-4755-a060-b467a2817a67"). InnerVolumeSpecName "kube-api-access-hnxbp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.211037 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28172373-ad9f-4755-a060-b467a2817a67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28172373-ad9f-4755-a060-b467a2817a67" (UID: "28172373-ad9f-4755-a060-b467a2817a67"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.246996 5118 generic.go:358] "Generic (PLEG): container finished" podID="8d65d512-7c64-462f-b40d-ad0252a88233" containerID="532226610bc3672e54c1a84b2f208a59250e8c5d82ca78ebcde74863bae441d3" exitCode=0 Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.247212 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94zfv" event={"ID":"8d65d512-7c64-462f-b40d-ad0252a88233","Type":"ContainerDied","Data":"532226610bc3672e54c1a84b2f208a59250e8c5d82ca78ebcde74863bae441d3"} Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.250203 5118 generic.go:358] "Generic (PLEG): container finished" podID="67f8120d-af6d-4e77-9772-0fc55dfad0bf" containerID="00fe3c213943c94d49f9c07ebca5629df4b6ccb2fab0e8cfe73f67a6fda85a09" exitCode=0 Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.250423 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28172373-ad9f-4755-a060-b467a2817a67-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.250453 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hnxbp\" (UniqueName: \"kubernetes.io/projected/28172373-ad9f-4755-a060-b467a2817a67-kube-api-access-hnxbp\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.250465 5118 reconciler_common.go:299] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28172373-ad9f-4755-a060-b467a2817a67-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.250427 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5pql" event={"ID":"67f8120d-af6d-4e77-9772-0fc55dfad0bf","Type":"ContainerDied","Data":"00fe3c213943c94d49f9c07ebca5629df4b6ccb2fab0e8cfe73f67a6fda85a09"} Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.252846 5118 generic.go:358] "Generic (PLEG): container finished" podID="d18361c6-e5b6-44f0-b6d4-4dae1ff8c741" containerID="b612f5c41ec000be28e4c9c9896ae76218539305be25a1fd8cd030aa10e11d17" exitCode=0 Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.252903 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4frhj" event={"ID":"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741","Type":"ContainerDied","Data":"b612f5c41ec000be28e4c9c9896ae76218539305be25a1fd8cd030aa10e11d17"} Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.253950 5118 generic.go:358] "Generic (PLEG): container finished" podID="a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" containerID="7fc80a9d862859bc887a15b3aa15cd37e9cfe4c7c11072a59fada4a5f9114766" exitCode=0 Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.254016 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" event={"ID":"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00","Type":"ContainerDied","Data":"7fc80a9d862859bc887a15b3aa15cd37e9cfe4c7c11072a59fada4a5f9114766"} Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.254037 5118 scope.go:117] "RemoveContainer" containerID="3ff85f1d6300e9395787d48e93f1c0f2a1727898f093606856ee28c33b663611" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.262885 5118 generic.go:358] "Generic (PLEG): container finished" podID="28172373-ad9f-4755-a060-b467a2817a67" 
containerID="71f5cae005056bcf6ab18ef5f90f6f3cfd163f83065921bc0194f42eb2ee2571" exitCode=0 Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.262933 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6c5wr" event={"ID":"28172373-ad9f-4755-a060-b467a2817a67","Type":"ContainerDied","Data":"71f5cae005056bcf6ab18ef5f90f6f3cfd163f83065921bc0194f42eb2ee2571"} Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.262958 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6c5wr" event={"ID":"28172373-ad9f-4755-a060-b467a2817a67","Type":"ContainerDied","Data":"d8ab9e219a4a19d0af33e463e2dc0564e17bfaa3e59c850571275175d4b42aa9"} Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.263035 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6c5wr" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.269713 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s5pql" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.274601 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.280267 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.288408 5118 scope.go:117] "RemoveContainer" containerID="71f5cae005056bcf6ab18ef5f90f6f3cfd163f83065921bc0194f42eb2ee2571" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.290513 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94zfv" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.303273 5118 scope.go:117] "RemoveContainer" containerID="1bf31aadce08469a5109a87ba9ef45d3ddc653963f95982cc4574c34fdb52d36" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.338936 5118 scope.go:117] "RemoveContainer" containerID="b9875a0b11f299a2af24e66dff921ffd960c91393c8aa5db6efb7824925fb409" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.350992 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-tmp\") pod \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.351076 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f8120d-af6d-4e77-9772-0fc55dfad0bf-utilities\") pod \"67f8120d-af6d-4e77-9772-0fc55dfad0bf\" (UID: \"67f8120d-af6d-4e77-9772-0fc55dfad0bf\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.351110 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v879m\" (UniqueName: \"kubernetes.io/projected/67f8120d-af6d-4e77-9772-0fc55dfad0bf-kube-api-access-v879m\") pod \"67f8120d-af6d-4e77-9772-0fc55dfad0bf\" (UID: \"67f8120d-af6d-4e77-9772-0fc55dfad0bf\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.351213 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-877zw\" (UniqueName: \"kubernetes.io/projected/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-kube-api-access-877zw\") pod \"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741\" (UID: \"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.351245 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-marketplace-operator-metrics\") pod \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.351274 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg6xt\" (UniqueName: \"kubernetes.io/projected/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-kube-api-access-xg6xt\") pod \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.351308 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-marketplace-trusted-ca\") pod \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\" (UID: \"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.351339 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d65d512-7c64-462f-b40d-ad0252a88233-catalog-content\") pod \"8d65d512-7c64-462f-b40d-ad0252a88233\" (UID: \"8d65d512-7c64-462f-b40d-ad0252a88233\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.351358 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-catalog-content\") pod \"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741\" (UID: \"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.351398 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d65d512-7c64-462f-b40d-ad0252a88233-utilities\") pod \"8d65d512-7c64-462f-b40d-ad0252a88233\" (UID: 
\"8d65d512-7c64-462f-b40d-ad0252a88233\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.351434 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl6xq\" (UniqueName: \"kubernetes.io/projected/8d65d512-7c64-462f-b40d-ad0252a88233-kube-api-access-bl6xq\") pod \"8d65d512-7c64-462f-b40d-ad0252a88233\" (UID: \"8d65d512-7c64-462f-b40d-ad0252a88233\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.351466 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-utilities\") pod \"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741\" (UID: \"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.351517 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f8120d-af6d-4e77-9772-0fc55dfad0bf-catalog-content\") pod \"67f8120d-af6d-4e77-9772-0fc55dfad0bf\" (UID: \"67f8120d-af6d-4e77-9772-0fc55dfad0bf\") " Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.351780 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-tmp" (OuterVolumeSpecName: "tmp") pod "a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" (UID: "a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.353106 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67f8120d-af6d-4e77-9772-0fc55dfad0bf-utilities" (OuterVolumeSpecName: "utilities") pod "67f8120d-af6d-4e77-9772-0fc55dfad0bf" (UID: "67f8120d-af6d-4e77-9772-0fc55dfad0bf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.354310 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d65d512-7c64-462f-b40d-ad0252a88233-utilities" (OuterVolumeSpecName: "utilities") pod "8d65d512-7c64-462f-b40d-ad0252a88233" (UID: "8d65d512-7c64-462f-b40d-ad0252a88233"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.357545 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f8120d-af6d-4e77-9772-0fc55dfad0bf-kube-api-access-v879m" (OuterVolumeSpecName: "kube-api-access-v879m") pod "67f8120d-af6d-4e77-9772-0fc55dfad0bf" (UID: "67f8120d-af6d-4e77-9772-0fc55dfad0bf"). InnerVolumeSpecName "kube-api-access-v879m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.359941 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-kube-api-access-877zw" (OuterVolumeSpecName: "kube-api-access-877zw") pod "d18361c6-e5b6-44f0-b6d4-4dae1ff8c741" (UID: "d18361c6-e5b6-44f0-b6d4-4dae1ff8c741"). InnerVolumeSpecName "kube-api-access-877zw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.362026 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" (UID: "a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.362344 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-utilities" (OuterVolumeSpecName: "utilities") pod "d18361c6-e5b6-44f0-b6d4-4dae1ff8c741" (UID: "d18361c6-e5b6-44f0-b6d4-4dae1ff8c741"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.365546 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-kube-api-access-xg6xt" (OuterVolumeSpecName: "kube-api-access-xg6xt") pod "a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" (UID: "a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00"). InnerVolumeSpecName "kube-api-access-xg6xt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.369053 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d65d512-7c64-462f-b40d-ad0252a88233-kube-api-access-bl6xq" (OuterVolumeSpecName: "kube-api-access-bl6xq") pod "8d65d512-7c64-462f-b40d-ad0252a88233" (UID: "8d65d512-7c64-462f-b40d-ad0252a88233"). InnerVolumeSpecName "kube-api-access-bl6xq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.370881 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d18361c6-e5b6-44f0-b6d4-4dae1ff8c741" (UID: "d18361c6-e5b6-44f0-b6d4-4dae1ff8c741"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.373969 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" (UID: "a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.374205 5118 scope.go:117] "RemoveContainer" containerID="71f5cae005056bcf6ab18ef5f90f6f3cfd163f83065921bc0194f42eb2ee2571" Jan 21 00:14:43 crc kubenswrapper[5118]: E0121 00:14:43.375500 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71f5cae005056bcf6ab18ef5f90f6f3cfd163f83065921bc0194f42eb2ee2571\": container with ID starting with 71f5cae005056bcf6ab18ef5f90f6f3cfd163f83065921bc0194f42eb2ee2571 not found: ID does not exist" containerID="71f5cae005056bcf6ab18ef5f90f6f3cfd163f83065921bc0194f42eb2ee2571" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.375543 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71f5cae005056bcf6ab18ef5f90f6f3cfd163f83065921bc0194f42eb2ee2571"} err="failed to get container status \"71f5cae005056bcf6ab18ef5f90f6f3cfd163f83065921bc0194f42eb2ee2571\": rpc error: code = NotFound desc = could not find container \"71f5cae005056bcf6ab18ef5f90f6f3cfd163f83065921bc0194f42eb2ee2571\": container with ID starting with 71f5cae005056bcf6ab18ef5f90f6f3cfd163f83065921bc0194f42eb2ee2571 not found: ID does not exist" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.375570 5118 scope.go:117] "RemoveContainer" containerID="1bf31aadce08469a5109a87ba9ef45d3ddc653963f95982cc4574c34fdb52d36" Jan 21 00:14:43 crc kubenswrapper[5118]: E0121 
00:14:43.375805 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bf31aadce08469a5109a87ba9ef45d3ddc653963f95982cc4574c34fdb52d36\": container with ID starting with 1bf31aadce08469a5109a87ba9ef45d3ddc653963f95982cc4574c34fdb52d36 not found: ID does not exist" containerID="1bf31aadce08469a5109a87ba9ef45d3ddc653963f95982cc4574c34fdb52d36" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.375829 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf31aadce08469a5109a87ba9ef45d3ddc653963f95982cc4574c34fdb52d36"} err="failed to get container status \"1bf31aadce08469a5109a87ba9ef45d3ddc653963f95982cc4574c34fdb52d36\": rpc error: code = NotFound desc = could not find container \"1bf31aadce08469a5109a87ba9ef45d3ddc653963f95982cc4574c34fdb52d36\": container with ID starting with 1bf31aadce08469a5109a87ba9ef45d3ddc653963f95982cc4574c34fdb52d36 not found: ID does not exist" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.375840 5118 scope.go:117] "RemoveContainer" containerID="b9875a0b11f299a2af24e66dff921ffd960c91393c8aa5db6efb7824925fb409" Jan 21 00:14:43 crc kubenswrapper[5118]: E0121 00:14:43.376033 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9875a0b11f299a2af24e66dff921ffd960c91393c8aa5db6efb7824925fb409\": container with ID starting with b9875a0b11f299a2af24e66dff921ffd960c91393c8aa5db6efb7824925fb409 not found: ID does not exist" containerID="b9875a0b11f299a2af24e66dff921ffd960c91393c8aa5db6efb7824925fb409" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.376053 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9875a0b11f299a2af24e66dff921ffd960c91393c8aa5db6efb7824925fb409"} err="failed to get container status \"b9875a0b11f299a2af24e66dff921ffd960c91393c8aa5db6efb7824925fb409\": rpc 
error: code = NotFound desc = could not find container \"b9875a0b11f299a2af24e66dff921ffd960c91393c8aa5db6efb7824925fb409\": container with ID starting with b9875a0b11f299a2af24e66dff921ffd960c91393c8aa5db6efb7824925fb409 not found: ID does not exist" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.378507 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6c5wr"] Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.397442 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6c5wr"] Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.414219 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67f8120d-af6d-4e77-9772-0fc55dfad0bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67f8120d-af6d-4e77-9772-0fc55dfad0bf" (UID: "67f8120d-af6d-4e77-9772-0fc55dfad0bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.452910 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f8120d-af6d-4e77-9772-0fc55dfad0bf-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.452946 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v879m\" (UniqueName: \"kubernetes.io/projected/67f8120d-af6d-4e77-9772-0fc55dfad0bf-kube-api-access-v879m\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.452956 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-877zw\" (UniqueName: \"kubernetes.io/projected/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-kube-api-access-877zw\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.452965 5118 reconciler_common.go:299] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.452975 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xg6xt\" (UniqueName: \"kubernetes.io/projected/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-kube-api-access-xg6xt\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.452983 5118 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.452993 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.453002 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d65d512-7c64-462f-b40d-ad0252a88233-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.453009 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bl6xq\" (UniqueName: \"kubernetes.io/projected/8d65d512-7c64-462f-b40d-ad0252a88233-kube-api-access-bl6xq\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.453017 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.453024 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/67f8120d-af6d-4e77-9772-0fc55dfad0bf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.453031 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.475806 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d65d512-7c64-462f-b40d-ad0252a88233-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d65d512-7c64-462f-b40d-ad0252a88233" (UID: "8d65d512-7c64-462f-b40d-ad0252a88233"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.536701 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-4wgxd"] Jan 21 00:14:43 crc kubenswrapper[5118]: I0121 00:14:43.554303 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d65d512-7c64-462f-b40d-ad0252a88233-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.271726 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" event={"ID":"602c053c-5e99-4f10-888b-0ea7a740a476","Type":"ContainerStarted","Data":"f9e68d35fc7e7996d538bce4a12dae9f6b4060e1d340ced84b75ff0bd6af322e"} Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.272196 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.272221 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" 
event={"ID":"602c053c-5e99-4f10-888b-0ea7a740a476","Type":"ContainerStarted","Data":"20e93b1b5a898b342505135408fadacc85a27e3d02efdcfac89ae29d0992ce26"} Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.274079 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4frhj" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.274077 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4frhj" event={"ID":"d18361c6-e5b6-44f0-b6d4-4dae1ff8c741","Type":"ContainerDied","Data":"4bda1af60e5cd2d9bf46add4c880292139ed48e42f293396b9906a2983c2f8f7"} Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.274207 5118 scope.go:117] "RemoveContainer" containerID="b612f5c41ec000be28e4c9c9896ae76218539305be25a1fd8cd030aa10e11d17" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.275729 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.277099 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" event={"ID":"a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00","Type":"ContainerDied","Data":"fb4021bffabe881856ddef4066b589339185f407ed9fb12652b83b4cbb0717c3"} Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.277241 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-5r9pr" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.281986 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94zfv" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.282014 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94zfv" event={"ID":"8d65d512-7c64-462f-b40d-ad0252a88233","Type":"ContainerDied","Data":"1eda30b2e64eed866377c347865801f347beff47b57e88957bab7bcecde38551"} Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.286538 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5pql" event={"ID":"67f8120d-af6d-4e77-9772-0fc55dfad0bf","Type":"ContainerDied","Data":"8f1144cce7ab9669ef3ff6f172160d821f545245f19e091238291f067a90b7d1"} Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.286681 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s5pql" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.295456 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-4wgxd" podStartSLOduration=2.295431699 podStartE2EDuration="2.295431699s" podCreationTimestamp="2026-01-21 00:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:14:44.287701937 +0000 UTC m=+339.611948985" watchObservedRunningTime="2026-01-21 00:14:44.295431699 +0000 UTC m=+339.619678727" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.306148 5118 scope.go:117] "RemoveContainer" containerID="8fdc2b2cd5aeb98eeccb724ed45fa038952a7530fb2b905459918a2949fc02b3" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.349182 5118 scope.go:117] "RemoveContainer" containerID="cebfc3ba7fc479b564338f122863b8ca7c1a43361cff35d80d5fb74f334357fb" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.355438 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-4frhj"] Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.366840 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4frhj"] Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.371385 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-94zfv"] Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.375284 5118 scope.go:117] "RemoveContainer" containerID="7fc80a9d862859bc887a15b3aa15cd37e9cfe4c7c11072a59fada4a5f9114766" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.376540 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-94zfv"] Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.386023 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"] Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.386074 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-5r9pr"] Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.386092 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s5pql"] Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.389145 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s5pql"] Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.397463 5118 scope.go:117] "RemoveContainer" containerID="532226610bc3672e54c1a84b2f208a59250e8c5d82ca78ebcde74863bae441d3" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.409887 5118 scope.go:117] "RemoveContainer" containerID="064e7880e54045b59932d7093dd755638be2dbfbe989d221424a43ec505c91e1" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.423774 5118 scope.go:117] "RemoveContainer" containerID="ebcc1f80caeef4fa7c41b04e8b8c47f377a3c8d8d10e632e53e1df4b223a2a4f" 
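The entries above all share the same shape: a journald syslog prefix ("Jan 21 00:14:44 crc kubenswrapper[5118]:") followed by a klog header (severity letter, date, time, PID, source file:line) and a structured message. A minimal sketch of a parser for that shape — the helper name and regex are illustrative, not part of kubelet or journald:

```python
import re

# Matches: "<syslog stamp> <host> <unit>[<pid>]: Lmmdd hh:mm:ss.uuuuuu <pid> file.go:NN] msg"
# This pattern is an assumption based on the entries above, not an official format spec.
KLOG_RE = re.compile(
    r'^(?P<stamp>\w{3} +\d+ [\d:]{8}) (?P<host>\S+) (?P<unit>[\w-]+)\[(?P<unitpid>\d+)\]: '
    r'(?P<level>[IWEF])(?P<klogdate>\d{4}) (?P<klogtime>[\d:.]+)\s+(?P<pid>\d+) '
    r'(?P<src>[\w._-]+:\d+)\] (?P<msg>.*)$'
)

def parse_kubelet_line(line: str):
    """Split one journald-wrapped kubelet line into its klog fields.

    Returns a dict of named groups (level, src, msg, ...) or None when the
    line does not carry a klog header (e.g. plain flag-deprecation warnings).
    """
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

# Example against one of the "RemoveContainer" entries above:
sample = ('Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.423774 5118 '
          'scope.go:117] "RemoveContainer" '
          'containerID="ebcc1f80caeef4fa7c41b04e8b8c47f377a3c8d8d10e632e53e1df4b223a2a4f"')
rec = parse_kubelet_line(sample)
```

Filtering on `rec["src"]` (e.g. `kubelet.go`, `reconciler_common.go`) or `rec["level"]` is often enough to separate the PLEG/SyncLoop events from the volume-reconciler noise when triaging a log like this one.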
Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.435956 5118 scope.go:117] "RemoveContainer" containerID="00fe3c213943c94d49f9c07ebca5629df4b6ccb2fab0e8cfe73f67a6fda85a09" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.447950 5118 scope.go:117] "RemoveContainer" containerID="1a0da28383c155e574efb1373c98b8cd787809b3ef0e2e9d3ff83f06684bb4fb" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.465376 5118 scope.go:117] "RemoveContainer" containerID="59f7aca7e9712f680c32c494ec92f6b88f1d57b3452239bdd98b0f81ea100e16" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.869560 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gptdw"] Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872012 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d18361c6-e5b6-44f0-b6d4-4dae1ff8c741" containerName="registry-server" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872059 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18361c6-e5b6-44f0-b6d4-4dae1ff8c741" containerName="registry-server" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872087 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d65d512-7c64-462f-b40d-ad0252a88233" containerName="extract-content" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872104 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d65d512-7c64-462f-b40d-ad0252a88233" containerName="extract-content" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872129 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d65d512-7c64-462f-b40d-ad0252a88233" containerName="extract-utilities" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872149 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d65d512-7c64-462f-b40d-ad0252a88233" containerName="extract-utilities" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 
00:14:44.872245 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28172373-ad9f-4755-a060-b467a2817a67" containerName="registry-server" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872263 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="28172373-ad9f-4755-a060-b467a2817a67" containerName="registry-server" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872291 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28172373-ad9f-4755-a060-b467a2817a67" containerName="extract-utilities" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872307 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="28172373-ad9f-4755-a060-b467a2817a67" containerName="extract-utilities" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872333 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="67f8120d-af6d-4e77-9772-0fc55dfad0bf" containerName="extract-utilities" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872346 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f8120d-af6d-4e77-9772-0fc55dfad0bf" containerName="extract-utilities" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872366 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d65d512-7c64-462f-b40d-ad0252a88233" containerName="registry-server" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872379 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d65d512-7c64-462f-b40d-ad0252a88233" containerName="registry-server" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872402 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="67f8120d-af6d-4e77-9772-0fc55dfad0bf" containerName="registry-server" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872414 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f8120d-af6d-4e77-9772-0fc55dfad0bf" 
containerName="registry-server" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872435 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" containerName="marketplace-operator" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872446 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" containerName="marketplace-operator" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872468 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" containerName="marketplace-operator" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872480 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" containerName="marketplace-operator" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872496 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="67f8120d-af6d-4e77-9772-0fc55dfad0bf" containerName="extract-content" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872507 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f8120d-af6d-4e77-9772-0fc55dfad0bf" containerName="extract-content" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872524 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d18361c6-e5b6-44f0-b6d4-4dae1ff8c741" containerName="extract-utilities" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872535 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18361c6-e5b6-44f0-b6d4-4dae1ff8c741" containerName="extract-utilities" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872554 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28172373-ad9f-4755-a060-b467a2817a67" containerName="extract-content" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872564 5118 
state_mem.go:107] "Deleted CPUSet assignment" podUID="28172373-ad9f-4755-a060-b467a2817a67" containerName="extract-content" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872579 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d18361c6-e5b6-44f0-b6d4-4dae1ff8c741" containerName="extract-content" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872592 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18361c6-e5b6-44f0-b6d4-4dae1ff8c741" containerName="extract-content" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872742 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="28172373-ad9f-4755-a060-b467a2817a67" containerName="registry-server" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872762 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="d18361c6-e5b6-44f0-b6d4-4dae1ff8c741" containerName="registry-server" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872777 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" containerName="marketplace-operator" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872795 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="8d65d512-7c64-462f-b40d-ad0252a88233" containerName="registry-server" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872812 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="67f8120d-af6d-4e77-9772-0fc55dfad0bf" containerName="registry-server" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.872825 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" containerName="marketplace-operator" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.884325 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gptdw"] Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.884510 
5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gptdw" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.886606 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.971094 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7vvx\" (UniqueName: \"kubernetes.io/projected/c35d8860-c9f7-468d-9832-45b92d9d6e1c-kube-api-access-n7vvx\") pod \"certified-operators-gptdw\" (UID: \"c35d8860-c9f7-468d-9832-45b92d9d6e1c\") " pod="openshift-marketplace/certified-operators-gptdw" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.973624 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35d8860-c9f7-468d-9832-45b92d9d6e1c-utilities\") pod \"certified-operators-gptdw\" (UID: \"c35d8860-c9f7-468d-9832-45b92d9d6e1c\") " pod="openshift-marketplace/certified-operators-gptdw" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.975814 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35d8860-c9f7-468d-9832-45b92d9d6e1c-catalog-content\") pod \"certified-operators-gptdw\" (UID: \"c35d8860-c9f7-468d-9832-45b92d9d6e1c\") " pod="openshift-marketplace/certified-operators-gptdw" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.983195 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28172373-ad9f-4755-a060-b467a2817a67" path="/var/lib/kubelet/pods/28172373-ad9f-4755-a060-b467a2817a67/volumes" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.983969 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="67f8120d-af6d-4e77-9772-0fc55dfad0bf" path="/var/lib/kubelet/pods/67f8120d-af6d-4e77-9772-0fc55dfad0bf/volumes" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.984584 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d65d512-7c64-462f-b40d-ad0252a88233" path="/var/lib/kubelet/pods/8d65d512-7c64-462f-b40d-ad0252a88233/volumes" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.985613 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00" path="/var/lib/kubelet/pods/a651b3d8-5a1f-4da5-8d11-e6e3b1ef5d00/volumes" Jan 21 00:14:44 crc kubenswrapper[5118]: I0121 00:14:44.986036 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d18361c6-e5b6-44f0-b6d4-4dae1ff8c741" path="/var/lib/kubelet/pods/d18361c6-e5b6-44f0-b6d4-4dae1ff8c741/volumes" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.064069 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-45trt"] Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.072552 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-45trt" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.076738 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n7vvx\" (UniqueName: \"kubernetes.io/projected/c35d8860-c9f7-468d-9832-45b92d9d6e1c-kube-api-access-n7vvx\") pod \"certified-operators-gptdw\" (UID: \"c35d8860-c9f7-468d-9832-45b92d9d6e1c\") " pod="openshift-marketplace/certified-operators-gptdw" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.076786 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35d8860-c9f7-468d-9832-45b92d9d6e1c-utilities\") pod \"certified-operators-gptdw\" (UID: \"c35d8860-c9f7-468d-9832-45b92d9d6e1c\") " pod="openshift-marketplace/certified-operators-gptdw" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.076818 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35d8860-c9f7-468d-9832-45b92d9d6e1c-catalog-content\") pod \"certified-operators-gptdw\" (UID: \"c35d8860-c9f7-468d-9832-45b92d9d6e1c\") " pod="openshift-marketplace/certified-operators-gptdw" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.076972 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-45trt"] Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.077225 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35d8860-c9f7-468d-9832-45b92d9d6e1c-catalog-content\") pod \"certified-operators-gptdw\" (UID: \"c35d8860-c9f7-468d-9832-45b92d9d6e1c\") " pod="openshift-marketplace/certified-operators-gptdw" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.077340 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c35d8860-c9f7-468d-9832-45b92d9d6e1c-utilities\") pod \"certified-operators-gptdw\" (UID: \"c35d8860-c9f7-468d-9832-45b92d9d6e1c\") " pod="openshift-marketplace/certified-operators-gptdw" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.080287 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.104749 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7vvx\" (UniqueName: \"kubernetes.io/projected/c35d8860-c9f7-468d-9832-45b92d9d6e1c-kube-api-access-n7vvx\") pod \"certified-operators-gptdw\" (UID: \"c35d8860-c9f7-468d-9832-45b92d9d6e1c\") " pod="openshift-marketplace/certified-operators-gptdw" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.177535 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daccdf66-85c0-49b6-a857-638d2b782a9a-catalog-content\") pod \"community-operators-45trt\" (UID: \"daccdf66-85c0-49b6-a857-638d2b782a9a\") " pod="openshift-marketplace/community-operators-45trt" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.177600 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tbtc\" (UniqueName: \"kubernetes.io/projected/daccdf66-85c0-49b6-a857-638d2b782a9a-kube-api-access-5tbtc\") pod \"community-operators-45trt\" (UID: \"daccdf66-85c0-49b6-a857-638d2b782a9a\") " pod="openshift-marketplace/community-operators-45trt" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.177620 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daccdf66-85c0-49b6-a857-638d2b782a9a-utilities\") pod \"community-operators-45trt\" (UID: 
\"daccdf66-85c0-49b6-a857-638d2b782a9a\") " pod="openshift-marketplace/community-operators-45trt" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.211715 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gptdw" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.279253 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5tbtc\" (UniqueName: \"kubernetes.io/projected/daccdf66-85c0-49b6-a857-638d2b782a9a-kube-api-access-5tbtc\") pod \"community-operators-45trt\" (UID: \"daccdf66-85c0-49b6-a857-638d2b782a9a\") " pod="openshift-marketplace/community-operators-45trt" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.279307 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daccdf66-85c0-49b6-a857-638d2b782a9a-utilities\") pod \"community-operators-45trt\" (UID: \"daccdf66-85c0-49b6-a857-638d2b782a9a\") " pod="openshift-marketplace/community-operators-45trt" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.279403 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daccdf66-85c0-49b6-a857-638d2b782a9a-catalog-content\") pod \"community-operators-45trt\" (UID: \"daccdf66-85c0-49b6-a857-638d2b782a9a\") " pod="openshift-marketplace/community-operators-45trt" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.279968 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daccdf66-85c0-49b6-a857-638d2b782a9a-catalog-content\") pod \"community-operators-45trt\" (UID: \"daccdf66-85c0-49b6-a857-638d2b782a9a\") " pod="openshift-marketplace/community-operators-45trt" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.280911 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daccdf66-85c0-49b6-a857-638d2b782a9a-utilities\") pod \"community-operators-45trt\" (UID: \"daccdf66-85c0-49b6-a857-638d2b782a9a\") " pod="openshift-marketplace/community-operators-45trt" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.300525 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tbtc\" (UniqueName: \"kubernetes.io/projected/daccdf66-85c0-49b6-a857-638d2b782a9a-kube-api-access-5tbtc\") pod \"community-operators-45trt\" (UID: \"daccdf66-85c0-49b6-a857-638d2b782a9a\") " pod="openshift-marketplace/community-operators-45trt" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.396590 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-45trt" Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.603943 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gptdw"] Jan 21 00:14:45 crc kubenswrapper[5118]: W0121 00:14:45.604215 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc35d8860_c9f7_468d_9832_45b92d9d6e1c.slice/crio-bc7289becb6e84454ecfc1e8e6797d8b1a5f26ae23e6573f58544778c41c243d WatchSource:0}: Error finding container bc7289becb6e84454ecfc1e8e6797d8b1a5f26ae23e6573f58544778c41c243d: Status 404 returned error can't find the container with id bc7289becb6e84454ecfc1e8e6797d8b1a5f26ae23e6573f58544778c41c243d Jan 21 00:14:45 crc kubenswrapper[5118]: I0121 00:14:45.765008 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-45trt"] Jan 21 00:14:45 crc kubenswrapper[5118]: W0121 00:14:45.768850 5118 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddaccdf66_85c0_49b6_a857_638d2b782a9a.slice/crio-0b68c777adbee2c25fe1bd9162f98a17ace22d9958b085a3d5a84739d86ab002 WatchSource:0}: Error finding container 0b68c777adbee2c25fe1bd9162f98a17ace22d9958b085a3d5a84739d86ab002: Status 404 returned error can't find the container with id 0b68c777adbee2c25fe1bd9162f98a17ace22d9958b085a3d5a84739d86ab002 Jan 21 00:14:46 crc kubenswrapper[5118]: I0121 00:14:46.311697 5118 generic.go:358] "Generic (PLEG): container finished" podID="daccdf66-85c0-49b6-a857-638d2b782a9a" containerID="f959f28ff5837bca8313c1608ea5c116e0c7026017cec37f043e93e078e4593b" exitCode=0 Jan 21 00:14:46 crc kubenswrapper[5118]: I0121 00:14:46.311854 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45trt" event={"ID":"daccdf66-85c0-49b6-a857-638d2b782a9a","Type":"ContainerDied","Data":"f959f28ff5837bca8313c1608ea5c116e0c7026017cec37f043e93e078e4593b"} Jan 21 00:14:46 crc kubenswrapper[5118]: I0121 00:14:46.311880 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45trt" event={"ID":"daccdf66-85c0-49b6-a857-638d2b782a9a","Type":"ContainerStarted","Data":"0b68c777adbee2c25fe1bd9162f98a17ace22d9958b085a3d5a84739d86ab002"} Jan 21 00:14:46 crc kubenswrapper[5118]: I0121 00:14:46.313797 5118 generic.go:358] "Generic (PLEG): container finished" podID="c35d8860-c9f7-468d-9832-45b92d9d6e1c" containerID="8cae8ad8fea1388909ea35d660ed9951977b562298aeb093c651179579d0ae8c" exitCode=0 Jan 21 00:14:46 crc kubenswrapper[5118]: I0121 00:14:46.313861 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gptdw" event={"ID":"c35d8860-c9f7-468d-9832-45b92d9d6e1c","Type":"ContainerDied","Data":"8cae8ad8fea1388909ea35d660ed9951977b562298aeb093c651179579d0ae8c"} Jan 21 00:14:46 crc kubenswrapper[5118]: I0121 00:14:46.313892 5118 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-gptdw" event={"ID":"c35d8860-c9f7-468d-9832-45b92d9d6e1c","Type":"ContainerStarted","Data":"bc7289becb6e84454ecfc1e8e6797d8b1a5f26ae23e6573f58544778c41c243d"} Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.319282 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gptdw" event={"ID":"c35d8860-c9f7-468d-9832-45b92d9d6e1c","Type":"ContainerStarted","Data":"16a3773c7e38a395b436159ddd8ed0ee9388cb60c202f6b39fc40699dfe4ddec"} Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.323665 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45trt" event={"ID":"daccdf66-85c0-49b6-a857-638d2b782a9a","Type":"ContainerStarted","Data":"37f60ddaa25ab2f5d73bb2b44a0ac6991f88417c1cb451f9ba53ba4ed8961dd1"} Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.458663 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-sfj8c"] Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.464609 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6wj6k"] Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.464759 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.467877 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wj6k" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.469459 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.481910 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wj6k"] Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.497887 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-sfj8c"] Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.511907 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cczlt\" (UniqueName: \"kubernetes.io/projected/8a36e2d2-0658-478e-8105-459a04d0234b-kube-api-access-cczlt\") pod \"redhat-marketplace-6wj6k\" (UID: \"8a36e2d2-0658-478e-8105-459a04d0234b\") " pod="openshift-marketplace/redhat-marketplace-6wj6k" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.511954 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4dcb64d-77a9-4bbe-bae1-690b453704cc-bound-sa-token\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.511976 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78kff\" (UniqueName: \"kubernetes.io/projected/b4dcb64d-77a9-4bbe-bae1-690b453704cc-kube-api-access-78kff\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.512004 
5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.512047 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b4dcb64d-77a9-4bbe-bae1-690b453704cc-registry-tls\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.512088 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a36e2d2-0658-478e-8105-459a04d0234b-utilities\") pod \"redhat-marketplace-6wj6k\" (UID: \"8a36e2d2-0658-478e-8105-459a04d0234b\") " pod="openshift-marketplace/redhat-marketplace-6wj6k" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.512139 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b4dcb64d-77a9-4bbe-bae1-690b453704cc-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.512185 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a36e2d2-0658-478e-8105-459a04d0234b-catalog-content\") pod \"redhat-marketplace-6wj6k\" (UID: 
\"8a36e2d2-0658-478e-8105-459a04d0234b\") " pod="openshift-marketplace/redhat-marketplace-6wj6k" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.512199 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b4dcb64d-77a9-4bbe-bae1-690b453704cc-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.512221 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b4dcb64d-77a9-4bbe-bae1-690b453704cc-registry-certificates\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.512242 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4dcb64d-77a9-4bbe-bae1-690b453704cc-trusted-ca\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.605937 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.612815 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cczlt\" (UniqueName: \"kubernetes.io/projected/8a36e2d2-0658-478e-8105-459a04d0234b-kube-api-access-cczlt\") pod \"redhat-marketplace-6wj6k\" (UID: \"8a36e2d2-0658-478e-8105-459a04d0234b\") " pod="openshift-marketplace/redhat-marketplace-6wj6k" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.612860 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4dcb64d-77a9-4bbe-bae1-690b453704cc-bound-sa-token\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.612878 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-78kff\" (UniqueName: \"kubernetes.io/projected/b4dcb64d-77a9-4bbe-bae1-690b453704cc-kube-api-access-78kff\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.612942 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b4dcb64d-77a9-4bbe-bae1-690b453704cc-registry-tls\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.612972 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a36e2d2-0658-478e-8105-459a04d0234b-utilities\") pod \"redhat-marketplace-6wj6k\" (UID: \"8a36e2d2-0658-478e-8105-459a04d0234b\") " pod="openshift-marketplace/redhat-marketplace-6wj6k" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.613008 5118 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b4dcb64d-77a9-4bbe-bae1-690b453704cc-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.613043 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a36e2d2-0658-478e-8105-459a04d0234b-catalog-content\") pod \"redhat-marketplace-6wj6k\" (UID: \"8a36e2d2-0658-478e-8105-459a04d0234b\") " pod="openshift-marketplace/redhat-marketplace-6wj6k" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.613063 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b4dcb64d-77a9-4bbe-bae1-690b453704cc-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.613083 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b4dcb64d-77a9-4bbe-bae1-690b453704cc-registry-certificates\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.613103 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4dcb64d-77a9-4bbe-bae1-690b453704cc-trusted-ca\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc 
kubenswrapper[5118]: I0121 00:14:47.613582 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a36e2d2-0658-478e-8105-459a04d0234b-utilities\") pod \"redhat-marketplace-6wj6k\" (UID: \"8a36e2d2-0658-478e-8105-459a04d0234b\") " pod="openshift-marketplace/redhat-marketplace-6wj6k" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.614404 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4dcb64d-77a9-4bbe-bae1-690b453704cc-trusted-ca\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.615519 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b4dcb64d-77a9-4bbe-bae1-690b453704cc-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.615762 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a36e2d2-0658-478e-8105-459a04d0234b-catalog-content\") pod \"redhat-marketplace-6wj6k\" (UID: \"8a36e2d2-0658-478e-8105-459a04d0234b\") " pod="openshift-marketplace/redhat-marketplace-6wj6k" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.616731 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b4dcb64d-77a9-4bbe-bae1-690b453704cc-registry-certificates\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc 
kubenswrapper[5118]: I0121 00:14:47.622121 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b4dcb64d-77a9-4bbe-bae1-690b453704cc-registry-tls\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.626240 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b4dcb64d-77a9-4bbe-bae1-690b453704cc-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.630926 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-78kff\" (UniqueName: \"kubernetes.io/projected/b4dcb64d-77a9-4bbe-bae1-690b453704cc-kube-api-access-78kff\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.632376 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cczlt\" (UniqueName: \"kubernetes.io/projected/8a36e2d2-0658-478e-8105-459a04d0234b-kube-api-access-cczlt\") pod \"redhat-marketplace-6wj6k\" (UID: \"8a36e2d2-0658-478e-8105-459a04d0234b\") " pod="openshift-marketplace/redhat-marketplace-6wj6k" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.634281 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4dcb64d-77a9-4bbe-bae1-690b453704cc-bound-sa-token\") pod \"image-registry-5d9d95bf5b-sfj8c\" (UID: \"b4dcb64d-77a9-4bbe-bae1-690b453704cc\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.663497 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cgwmg"] Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.677265 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cgwmg"] Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.677448 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cgwmg" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.680179 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.714526 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/684a88a4-f9ff-4495-85b9-499e70a2d8b4-utilities\") pod \"redhat-operators-cgwmg\" (UID: \"684a88a4-f9ff-4495-85b9-499e70a2d8b4\") " pod="openshift-marketplace/redhat-operators-cgwmg" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.714564 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slx7s\" (UniqueName: \"kubernetes.io/projected/684a88a4-f9ff-4495-85b9-499e70a2d8b4-kube-api-access-slx7s\") pod \"redhat-operators-cgwmg\" (UID: \"684a88a4-f9ff-4495-85b9-499e70a2d8b4\") " pod="openshift-marketplace/redhat-operators-cgwmg" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.714584 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/684a88a4-f9ff-4495-85b9-499e70a2d8b4-catalog-content\") pod \"redhat-operators-cgwmg\" (UID: \"684a88a4-f9ff-4495-85b9-499e70a2d8b4\") " 
pod="openshift-marketplace/redhat-operators-cgwmg" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.787345 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.794935 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wj6k" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.815338 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/684a88a4-f9ff-4495-85b9-499e70a2d8b4-utilities\") pod \"redhat-operators-cgwmg\" (UID: \"684a88a4-f9ff-4495-85b9-499e70a2d8b4\") " pod="openshift-marketplace/redhat-operators-cgwmg" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.815381 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-slx7s\" (UniqueName: \"kubernetes.io/projected/684a88a4-f9ff-4495-85b9-499e70a2d8b4-kube-api-access-slx7s\") pod \"redhat-operators-cgwmg\" (UID: \"684a88a4-f9ff-4495-85b9-499e70a2d8b4\") " pod="openshift-marketplace/redhat-operators-cgwmg" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.815409 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/684a88a4-f9ff-4495-85b9-499e70a2d8b4-catalog-content\") pod \"redhat-operators-cgwmg\" (UID: \"684a88a4-f9ff-4495-85b9-499e70a2d8b4\") " pod="openshift-marketplace/redhat-operators-cgwmg" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.815870 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/684a88a4-f9ff-4495-85b9-499e70a2d8b4-utilities\") pod \"redhat-operators-cgwmg\" (UID: \"684a88a4-f9ff-4495-85b9-499e70a2d8b4\") " pod="openshift-marketplace/redhat-operators-cgwmg" Jan 21 
00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.816126 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/684a88a4-f9ff-4495-85b9-499e70a2d8b4-catalog-content\") pod \"redhat-operators-cgwmg\" (UID: \"684a88a4-f9ff-4495-85b9-499e70a2d8b4\") " pod="openshift-marketplace/redhat-operators-cgwmg" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.832474 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-slx7s\" (UniqueName: \"kubernetes.io/projected/684a88a4-f9ff-4495-85b9-499e70a2d8b4-kube-api-access-slx7s\") pod \"redhat-operators-cgwmg\" (UID: \"684a88a4-f9ff-4495-85b9-499e70a2d8b4\") " pod="openshift-marketplace/redhat-operators-cgwmg" Jan 21 00:14:47 crc kubenswrapper[5118]: I0121 00:14:47.999559 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cgwmg" Jan 21 00:14:48 crc kubenswrapper[5118]: I0121 00:14:48.175330 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-sfj8c"] Jan 21 00:14:48 crc kubenswrapper[5118]: W0121 00:14:48.205622 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4dcb64d_77a9_4bbe_bae1_690b453704cc.slice/crio-5635785c5a376a2a53865d0b573f71997343209c308bce4288c7efb56aacc1b6 WatchSource:0}: Error finding container 5635785c5a376a2a53865d0b573f71997343209c308bce4288c7efb56aacc1b6: Status 404 returned error can't find the container with id 5635785c5a376a2a53865d0b573f71997343209c308bce4288c7efb56aacc1b6 Jan 21 00:14:48 crc kubenswrapper[5118]: I0121 00:14:48.245023 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wj6k"] Jan 21 00:14:48 crc kubenswrapper[5118]: W0121 00:14:48.250977 5118 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a36e2d2_0658_478e_8105_459a04d0234b.slice/crio-3afc36bca688b1476e0e6c82406fadcc89d9bec322267395c0d8f1faf195bbe0 WatchSource:0}: Error finding container 3afc36bca688b1476e0e6c82406fadcc89d9bec322267395c0d8f1faf195bbe0: Status 404 returned error can't find the container with id 3afc36bca688b1476e0e6c82406fadcc89d9bec322267395c0d8f1faf195bbe0 Jan 21 00:14:48 crc kubenswrapper[5118]: I0121 00:14:48.330004 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wj6k" event={"ID":"8a36e2d2-0658-478e-8105-459a04d0234b","Type":"ContainerStarted","Data":"3afc36bca688b1476e0e6c82406fadcc89d9bec322267395c0d8f1faf195bbe0"} Jan 21 00:14:48 crc kubenswrapper[5118]: I0121 00:14:48.332150 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" event={"ID":"b4dcb64d-77a9-4bbe-bae1-690b453704cc","Type":"ContainerStarted","Data":"5635785c5a376a2a53865d0b573f71997343209c308bce4288c7efb56aacc1b6"} Jan 21 00:14:48 crc kubenswrapper[5118]: I0121 00:14:48.333894 5118 generic.go:358] "Generic (PLEG): container finished" podID="c35d8860-c9f7-468d-9832-45b92d9d6e1c" containerID="16a3773c7e38a395b436159ddd8ed0ee9388cb60c202f6b39fc40699dfe4ddec" exitCode=0 Jan 21 00:14:48 crc kubenswrapper[5118]: I0121 00:14:48.334036 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gptdw" event={"ID":"c35d8860-c9f7-468d-9832-45b92d9d6e1c","Type":"ContainerDied","Data":"16a3773c7e38a395b436159ddd8ed0ee9388cb60c202f6b39fc40699dfe4ddec"} Jan 21 00:14:48 crc kubenswrapper[5118]: I0121 00:14:48.347440 5118 generic.go:358] "Generic (PLEG): container finished" podID="daccdf66-85c0-49b6-a857-638d2b782a9a" containerID="37f60ddaa25ab2f5d73bb2b44a0ac6991f88417c1cb451f9ba53ba4ed8961dd1" exitCode=0 Jan 21 00:14:48 crc kubenswrapper[5118]: I0121 00:14:48.347616 5118 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-45trt" event={"ID":"daccdf66-85c0-49b6-a857-638d2b782a9a","Type":"ContainerDied","Data":"37f60ddaa25ab2f5d73bb2b44a0ac6991f88417c1cb451f9ba53ba4ed8961dd1"} Jan 21 00:14:48 crc kubenswrapper[5118]: I0121 00:14:48.375565 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cgwmg"] Jan 21 00:14:49 crc kubenswrapper[5118]: I0121 00:14:49.367676 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45trt" event={"ID":"daccdf66-85c0-49b6-a857-638d2b782a9a","Type":"ContainerStarted","Data":"6796be2d8ca6bfb4c13977e4fc343baa5687376b6a30d56eba32c051c6289d2a"} Jan 21 00:14:49 crc kubenswrapper[5118]: I0121 00:14:49.368829 5118 generic.go:358] "Generic (PLEG): container finished" podID="8a36e2d2-0658-478e-8105-459a04d0234b" containerID="3c936dd8d7c4f279d12c850389e1c638a6a724c849a5e64bfb09c8ab7dcb2e21" exitCode=0 Jan 21 00:14:49 crc kubenswrapper[5118]: I0121 00:14:49.368923 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wj6k" event={"ID":"8a36e2d2-0658-478e-8105-459a04d0234b","Type":"ContainerDied","Data":"3c936dd8d7c4f279d12c850389e1c638a6a724c849a5e64bfb09c8ab7dcb2e21"} Jan 21 00:14:49 crc kubenswrapper[5118]: I0121 00:14:49.370358 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" event={"ID":"b4dcb64d-77a9-4bbe-bae1-690b453704cc","Type":"ContainerStarted","Data":"389c25f3db5d9a2fe2e48020d960b78e6d511742cef9647b3c8a1c15e772ccf8"} Jan 21 00:14:49 crc kubenswrapper[5118]: I0121 00:14:49.370467 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 21 00:14:49 crc kubenswrapper[5118]: I0121 00:14:49.372405 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-gptdw" event={"ID":"c35d8860-c9f7-468d-9832-45b92d9d6e1c","Type":"ContainerStarted","Data":"ff6aa004f59cba2bcb20061d8d0210b0d5519c4e25e7741950e864151b5f2db7"} Jan 21 00:14:49 crc kubenswrapper[5118]: I0121 00:14:49.373845 5118 generic.go:358] "Generic (PLEG): container finished" podID="684a88a4-f9ff-4495-85b9-499e70a2d8b4" containerID="53a8d0029cd3534f3e7fe8791913d01740afe7c3664ad01652b7db0777bae7f9" exitCode=0 Jan 21 00:14:49 crc kubenswrapper[5118]: I0121 00:14:49.373881 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgwmg" event={"ID":"684a88a4-f9ff-4495-85b9-499e70a2d8b4","Type":"ContainerDied","Data":"53a8d0029cd3534f3e7fe8791913d01740afe7c3664ad01652b7db0777bae7f9"} Jan 21 00:14:49 crc kubenswrapper[5118]: I0121 00:14:49.373998 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgwmg" event={"ID":"684a88a4-f9ff-4495-85b9-499e70a2d8b4","Type":"ContainerStarted","Data":"c683d5c138282538aea47cad4bf443ebcfacc55a0953326702244c530b9e7b35"} Jan 21 00:14:49 crc kubenswrapper[5118]: I0121 00:14:49.394669 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-45trt" podStartSLOduration=3.612827196 podStartE2EDuration="4.394655674s" podCreationTimestamp="2026-01-21 00:14:45 +0000 UTC" firstStartedPulling="2026-01-21 00:14:46.312624127 +0000 UTC m=+341.636871145" lastFinishedPulling="2026-01-21 00:14:47.094452605 +0000 UTC m=+342.418699623" observedRunningTime="2026-01-21 00:14:49.392288429 +0000 UTC m=+344.716535447" watchObservedRunningTime="2026-01-21 00:14:49.394655674 +0000 UTC m=+344.718902692" Jan 21 00:14:49 crc kubenswrapper[5118]: I0121 00:14:49.411903 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" podStartSLOduration=2.411886587 podStartE2EDuration="2.411886587s" 
podCreationTimestamp="2026-01-21 00:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:14:49.408696359 +0000 UTC m=+344.732943407" watchObservedRunningTime="2026-01-21 00:14:49.411886587 +0000 UTC m=+344.736133625" Jan 21 00:14:49 crc kubenswrapper[5118]: I0121 00:14:49.453077 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gptdw" podStartSLOduration=4.721741257 podStartE2EDuration="5.453060868s" podCreationTimestamp="2026-01-21 00:14:44 +0000 UTC" firstStartedPulling="2026-01-21 00:14:46.314950291 +0000 UTC m=+341.639197309" lastFinishedPulling="2026-01-21 00:14:47.046269902 +0000 UTC m=+342.370516920" observedRunningTime="2026-01-21 00:14:49.452246605 +0000 UTC m=+344.776493653" watchObservedRunningTime="2026-01-21 00:14:49.453060868 +0000 UTC m=+344.777307886" Jan 21 00:14:51 crc kubenswrapper[5118]: I0121 00:14:51.392297 5118 generic.go:358] "Generic (PLEG): container finished" podID="8a36e2d2-0658-478e-8105-459a04d0234b" containerID="bb6afa2612c81fb855c871f0ed3652f6e063afcaa8a951f8836db06626752119" exitCode=0 Jan 21 00:14:51 crc kubenswrapper[5118]: I0121 00:14:51.392376 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wj6k" event={"ID":"8a36e2d2-0658-478e-8105-459a04d0234b","Type":"ContainerDied","Data":"bb6afa2612c81fb855c871f0ed3652f6e063afcaa8a951f8836db06626752119"} Jan 21 00:14:51 crc kubenswrapper[5118]: I0121 00:14:51.394289 5118 generic.go:358] "Generic (PLEG): container finished" podID="684a88a4-f9ff-4495-85b9-499e70a2d8b4" containerID="827d5542a8ed8e5a3120d329e6e7c29429acc20eff7c24edcfdd73b13b3cf741" exitCode=0 Jan 21 00:14:51 crc kubenswrapper[5118]: I0121 00:14:51.394432 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgwmg" 
event={"ID":"684a88a4-f9ff-4495-85b9-499e70a2d8b4","Type":"ContainerDied","Data":"827d5542a8ed8e5a3120d329e6e7c29429acc20eff7c24edcfdd73b13b3cf741"} Jan 21 00:14:52 crc kubenswrapper[5118]: I0121 00:14:52.403805 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgwmg" event={"ID":"684a88a4-f9ff-4495-85b9-499e70a2d8b4","Type":"ContainerStarted","Data":"6da08f96199358a991d188e236c24acba5247183d706aa5d4f5110af0cf31c80"} Jan 21 00:14:52 crc kubenswrapper[5118]: I0121 00:14:52.406057 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wj6k" event={"ID":"8a36e2d2-0658-478e-8105-459a04d0234b","Type":"ContainerStarted","Data":"47af2c0ea7edda57758882a7b2d34ac9b9b2daa86db50b1fe30a2371e5339aac"} Jan 21 00:14:52 crc kubenswrapper[5118]: I0121 00:14:52.422756 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cgwmg" podStartSLOduration=4.54359584 podStartE2EDuration="5.422738559s" podCreationTimestamp="2026-01-21 00:14:47 +0000 UTC" firstStartedPulling="2026-01-21 00:14:49.37483254 +0000 UTC m=+344.699079558" lastFinishedPulling="2026-01-21 00:14:50.253975259 +0000 UTC m=+345.578222277" observedRunningTime="2026-01-21 00:14:52.422171414 +0000 UTC m=+347.746418452" watchObservedRunningTime="2026-01-21 00:14:52.422738559 +0000 UTC m=+347.746985577" Jan 21 00:14:52 crc kubenswrapper[5118]: I0121 00:14:52.444631 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6wj6k" podStartSLOduration=4.252928328 podStartE2EDuration="5.44461375s" podCreationTimestamp="2026-01-21 00:14:47 +0000 UTC" firstStartedPulling="2026-01-21 00:14:49.369702489 +0000 UTC m=+344.693949507" lastFinishedPulling="2026-01-21 00:14:50.561387911 +0000 UTC m=+345.885634929" observedRunningTime="2026-01-21 00:14:52.443782837 +0000 UTC m=+347.768029855" 
watchObservedRunningTime="2026-01-21 00:14:52.44461375 +0000 UTC m=+347.768860768" Jan 21 00:14:55 crc kubenswrapper[5118]: I0121 00:14:55.212726 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-gptdw" Jan 21 00:14:55 crc kubenswrapper[5118]: I0121 00:14:55.213060 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gptdw" Jan 21 00:14:55 crc kubenswrapper[5118]: I0121 00:14:55.251226 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gptdw" Jan 21 00:14:55 crc kubenswrapper[5118]: I0121 00:14:55.397719 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-45trt" Jan 21 00:14:55 crc kubenswrapper[5118]: I0121 00:14:55.397783 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-45trt" Jan 21 00:14:55 crc kubenswrapper[5118]: I0121 00:14:55.440219 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-45trt" Jan 21 00:14:55 crc kubenswrapper[5118]: I0121 00:14:55.459736 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gptdw" Jan 21 00:14:55 crc kubenswrapper[5118]: I0121 00:14:55.484641 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-45trt" Jan 21 00:14:57 crc kubenswrapper[5118]: I0121 00:14:57.796140 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6wj6k" Jan 21 00:14:57 crc kubenswrapper[5118]: I0121 00:14:57.797836 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/redhat-marketplace-6wj6k" Jan 21 00:14:57 crc kubenswrapper[5118]: I0121 00:14:57.849347 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6wj6k" Jan 21 00:14:58 crc kubenswrapper[5118]: I0121 00:14:58.000140 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cgwmg" Jan 21 00:14:58 crc kubenswrapper[5118]: I0121 00:14:58.000206 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-cgwmg" Jan 21 00:14:58 crc kubenswrapper[5118]: I0121 00:14:58.035140 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cgwmg" Jan 21 00:14:58 crc kubenswrapper[5118]: I0121 00:14:58.477501 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cgwmg" Jan 21 00:14:58 crc kubenswrapper[5118]: I0121 00:14:58.477774 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6wj6k" Jan 21 00:15:00 crc kubenswrapper[5118]: I0121 00:15:00.142265 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh"] Jan 21 00:15:00 crc kubenswrapper[5118]: I0121 00:15:00.154359 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh"] Jan 21 00:15:00 crc kubenswrapper[5118]: I0121 00:15:00.154511 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh" Jan 21 00:15:00 crc kubenswrapper[5118]: I0121 00:15:00.156627 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 21 00:15:00 crc kubenswrapper[5118]: I0121 00:15:00.156820 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 21 00:15:00 crc kubenswrapper[5118]: I0121 00:15:00.287788 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-config-volume\") pod \"collect-profiles-29482575-57nmh\" (UID: \"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh" Jan 21 00:15:00 crc kubenswrapper[5118]: I0121 00:15:00.288138 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-secret-volume\") pod \"collect-profiles-29482575-57nmh\" (UID: \"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh" Jan 21 00:15:00 crc kubenswrapper[5118]: I0121 00:15:00.288355 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdz44\" (UniqueName: \"kubernetes.io/projected/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-kube-api-access-zdz44\") pod \"collect-profiles-29482575-57nmh\" (UID: \"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh" Jan 21 00:15:00 crc kubenswrapper[5118]: I0121 00:15:00.389817 5118 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-config-volume\") pod \"collect-profiles-29482575-57nmh\" (UID: \"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh" Jan 21 00:15:00 crc kubenswrapper[5118]: I0121 00:15:00.389870 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-secret-volume\") pod \"collect-profiles-29482575-57nmh\" (UID: \"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh" Jan 21 00:15:00 crc kubenswrapper[5118]: I0121 00:15:00.389915 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zdz44\" (UniqueName: \"kubernetes.io/projected/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-kube-api-access-zdz44\") pod \"collect-profiles-29482575-57nmh\" (UID: \"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh" Jan 21 00:15:00 crc kubenswrapper[5118]: I0121 00:15:00.391396 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-config-volume\") pod \"collect-profiles-29482575-57nmh\" (UID: \"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh" Jan 21 00:15:00 crc kubenswrapper[5118]: I0121 00:15:00.400413 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-secret-volume\") pod \"collect-profiles-29482575-57nmh\" (UID: \"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh" Jan 21 00:15:00 crc 
kubenswrapper[5118]: I0121 00:15:00.410768 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdz44\" (UniqueName: \"kubernetes.io/projected/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-kube-api-access-zdz44\") pod \"collect-profiles-29482575-57nmh\" (UID: \"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh"
Jan 21 00:15:00 crc kubenswrapper[5118]: I0121 00:15:00.478835 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh"
Jan 21 00:15:00 crc kubenswrapper[5118]: I0121 00:15:00.923969 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh"]
Jan 21 00:15:00 crc kubenswrapper[5118]: W0121 00:15:00.928956 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44ae8c4f_2bd0_4fbb_ba91_007c932ee1bc.slice/crio-4b5621c6b24c0c3973fd421f3954c5c0259b6004e326557fa1d865bfbefa80b0 WatchSource:0}: Error finding container 4b5621c6b24c0c3973fd421f3954c5c0259b6004e326557fa1d865bfbefa80b0: Status 404 returned error can't find the container with id 4b5621c6b24c0c3973fd421f3954c5c0259b6004e326557fa1d865bfbefa80b0
Jan 21 00:15:01 crc kubenswrapper[5118]: I0121 00:15:01.456235 5118 generic.go:358] "Generic (PLEG): container finished" podID="44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc" containerID="5b0ced5c5522c08ad128a61d92edadfd696512390ea9a6bedb16395d6bbb4a3d" exitCode=0
Jan 21 00:15:01 crc kubenswrapper[5118]: I0121 00:15:01.456300 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh" event={"ID":"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc","Type":"ContainerDied","Data":"5b0ced5c5522c08ad128a61d92edadfd696512390ea9a6bedb16395d6bbb4a3d"}
Jan 21 00:15:01 crc kubenswrapper[5118]: I0121 00:15:01.456619 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh" event={"ID":"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc","Type":"ContainerStarted","Data":"4b5621c6b24c0c3973fd421f3954c5c0259b6004e326557fa1d865bfbefa80b0"}
Jan 21 00:15:02 crc kubenswrapper[5118]: I0121 00:15:02.677644 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh"
Jan 21 00:15:02 crc kubenswrapper[5118]: I0121 00:15:02.829128 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-config-volume\") pod \"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc\" (UID: \"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc\") "
Jan 21 00:15:02 crc kubenswrapper[5118]: I0121 00:15:02.829276 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-secret-volume\") pod \"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc\" (UID: \"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc\") "
Jan 21 00:15:02 crc kubenswrapper[5118]: I0121 00:15:02.829369 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdz44\" (UniqueName: \"kubernetes.io/projected/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-kube-api-access-zdz44\") pod \"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc\" (UID: \"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc\") "
Jan 21 00:15:02 crc kubenswrapper[5118]: I0121 00:15:02.829924 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-config-volume" (OuterVolumeSpecName: "config-volume") pod "44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc" (UID: "44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:15:02 crc kubenswrapper[5118]: I0121 00:15:02.838354 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc" (UID: "44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:15:02 crc kubenswrapper[5118]: I0121 00:15:02.841351 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-kube-api-access-zdz44" (OuterVolumeSpecName: "kube-api-access-zdz44") pod "44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc" (UID: "44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc"). InnerVolumeSpecName "kube-api-access-zdz44". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:15:02 crc kubenswrapper[5118]: I0121 00:15:02.930483 5118 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 00:15:02 crc kubenswrapper[5118]: I0121 00:15:02.930519 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zdz44\" (UniqueName: \"kubernetes.io/projected/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-kube-api-access-zdz44\") on node \"crc\" DevicePath \"\""
Jan 21 00:15:02 crc kubenswrapper[5118]: I0121 00:15:02.930528 5118 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 00:15:03 crc kubenswrapper[5118]: I0121 00:15:03.468193 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh" event={"ID":"44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc","Type":"ContainerDied","Data":"4b5621c6b24c0c3973fd421f3954c5c0259b6004e326557fa1d865bfbefa80b0"}
Jan 21 00:15:03 crc kubenswrapper[5118]: I0121 00:15:03.468228 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh"
Jan 21 00:15:03 crc kubenswrapper[5118]: I0121 00:15:03.468240 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b5621c6b24c0c3973fd421f3954c5c0259b6004e326557fa1d865bfbefa80b0"
Jan 21 00:15:05 crc kubenswrapper[5118]: I0121 00:15:05.325425 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-765c599d67-96mkg"]
Jan 21 00:15:05 crc kubenswrapper[5118]: I0121 00:15:05.326048 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" podUID="92b258d3-fca5-41a8-ab0e-80d50a26db7b" containerName="controller-manager" containerID="cri-o://44d41c951009faaf8403ec57a7e4f420f02131f6f5ae3d25db4a9c7b7b240377" gracePeriod=30
Jan 21 00:15:05 crc kubenswrapper[5118]: I0121 00:15:05.341078 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq"]
Jan 21 00:15:05 crc kubenswrapper[5118]: I0121 00:15:05.341389 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" podUID="58609d84-bc4a-4d86-b809-b325339ceded" containerName="route-controller-manager" containerID="cri-o://e680e479c4e636b91fd2974243a1ba95ce4bd187e6b03032a82459784c0157cb" gracePeriod=30
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.492919 5118 generic.go:358] "Generic (PLEG): container finished" podID="92b258d3-fca5-41a8-ab0e-80d50a26db7b" containerID="44d41c951009faaf8403ec57a7e4f420f02131f6f5ae3d25db4a9c7b7b240377" exitCode=0
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.493377 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" event={"ID":"92b258d3-fca5-41a8-ab0e-80d50a26db7b","Type":"ContainerDied","Data":"44d41c951009faaf8403ec57a7e4f420f02131f6f5ae3d25db4a9c7b7b240377"}
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.494920 5118 generic.go:358] "Generic (PLEG): container finished" podID="58609d84-bc4a-4d86-b809-b325339ceded" containerID="e680e479c4e636b91fd2974243a1ba95ce4bd187e6b03032a82459784c0157cb" exitCode=0
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.495009 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" event={"ID":"58609d84-bc4a-4d86-b809-b325339ceded","Type":"ContainerDied","Data":"e680e479c4e636b91fd2974243a1ba95ce4bd187e6b03032a82459784c0157cb"}
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.639738 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.645098 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-765c599d67-96mkg"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.672910 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"]
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.677251 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="92b258d3-fca5-41a8-ab0e-80d50a26db7b" containerName="controller-manager"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.677281 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b258d3-fca5-41a8-ab0e-80d50a26db7b" containerName="controller-manager"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.677294 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="58609d84-bc4a-4d86-b809-b325339ceded" containerName="route-controller-manager"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.677300 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="58609d84-bc4a-4d86-b809-b325339ceded" containerName="route-controller-manager"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.677315 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc" containerName="collect-profiles"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.677322 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc" containerName="collect-profiles"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.677442 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="58609d84-bc4a-4d86-b809-b325339ceded" containerName="route-controller-manager"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.677452 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc" containerName="collect-profiles"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.677460 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="92b258d3-fca5-41a8-ab0e-80d50a26db7b" containerName="controller-manager"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.693443 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"]
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.693664 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.694317 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-588787f94b-cgzf5"]
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.701089 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.709662 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-588787f94b-cgzf5"]
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.798814 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58609d84-bc4a-4d86-b809-b325339ceded-tmp\") pod \"58609d84-bc4a-4d86-b809-b325339ceded\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") "
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.798887 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58609d84-bc4a-4d86-b809-b325339ceded-config\") pod \"58609d84-bc4a-4d86-b809-b325339ceded\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") "
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.798911 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/92b258d3-fca5-41a8-ab0e-80d50a26db7b-tmp\") pod \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") "
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.798932 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58609d84-bc4a-4d86-b809-b325339ceded-client-ca\") pod \"58609d84-bc4a-4d86-b809-b325339ceded\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") "
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.798952 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7gg9\" (UniqueName: \"kubernetes.io/projected/58609d84-bc4a-4d86-b809-b325339ceded-kube-api-access-v7gg9\") pod \"58609d84-bc4a-4d86-b809-b325339ceded\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") "
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.798980 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-proxy-ca-bundles\") pod \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") "
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799053 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zlz2\" (UniqueName: \"kubernetes.io/projected/92b258d3-fca5-41a8-ab0e-80d50a26db7b-kube-api-access-2zlz2\") pod \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") "
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799078 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-config\") pod \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") "
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799119 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58609d84-bc4a-4d86-b809-b325339ceded-serving-cert\") pod \"58609d84-bc4a-4d86-b809-b325339ceded\" (UID: \"58609d84-bc4a-4d86-b809-b325339ceded\") "
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799171 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-client-ca\") pod \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") "
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799201 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92b258d3-fca5-41a8-ab0e-80d50a26db7b-serving-cert\") pod \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\" (UID: \"92b258d3-fca5-41a8-ab0e-80d50a26db7b\") "
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799301 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f142c5db-ee2b-442f-9702-dafbaf6da994-config\") pod \"route-controller-manager-84c66bb6b6-6dx2z\" (UID: \"f142c5db-ee2b-442f-9702-dafbaf6da994\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799325 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-tmp\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799352 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-serving-cert\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799370 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-config\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799397 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f142c5db-ee2b-442f-9702-dafbaf6da994-client-ca\") pod \"route-controller-manager-84c66bb6b6-6dx2z\" (UID: \"f142c5db-ee2b-442f-9702-dafbaf6da994\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799420 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f142c5db-ee2b-442f-9702-dafbaf6da994-serving-cert\") pod \"route-controller-manager-84c66bb6b6-6dx2z\" (UID: \"f142c5db-ee2b-442f-9702-dafbaf6da994\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799446 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx26n\" (UniqueName: \"kubernetes.io/projected/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-kube-api-access-bx26n\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799470 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f142c5db-ee2b-442f-9702-dafbaf6da994-tmp\") pod \"route-controller-manager-84c66bb6b6-6dx2z\" (UID: \"f142c5db-ee2b-442f-9702-dafbaf6da994\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799499 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-client-ca\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799552 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phbzv\" (UniqueName: \"kubernetes.io/projected/f142c5db-ee2b-442f-9702-dafbaf6da994-kube-api-access-phbzv\") pod \"route-controller-manager-84c66bb6b6-6dx2z\" (UID: \"f142c5db-ee2b-442f-9702-dafbaf6da994\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.799569 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-proxy-ca-bundles\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.800131 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58609d84-bc4a-4d86-b809-b325339ceded-tmp" (OuterVolumeSpecName: "tmp") pod "58609d84-bc4a-4d86-b809-b325339ceded" (UID: "58609d84-bc4a-4d86-b809-b325339ceded"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.800401 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92b258d3-fca5-41a8-ab0e-80d50a26db7b-tmp" (OuterVolumeSpecName: "tmp") pod "92b258d3-fca5-41a8-ab0e-80d50a26db7b" (UID: "92b258d3-fca5-41a8-ab0e-80d50a26db7b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.800713 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58609d84-bc4a-4d86-b809-b325339ceded-config" (OuterVolumeSpecName: "config") pod "58609d84-bc4a-4d86-b809-b325339ceded" (UID: "58609d84-bc4a-4d86-b809-b325339ceded"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.800810 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58609d84-bc4a-4d86-b809-b325339ceded-client-ca" (OuterVolumeSpecName: "client-ca") pod "58609d84-bc4a-4d86-b809-b325339ceded" (UID: "58609d84-bc4a-4d86-b809-b325339ceded"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.801103 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "92b258d3-fca5-41a8-ab0e-80d50a26db7b" (UID: "92b258d3-fca5-41a8-ab0e-80d50a26db7b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.801370 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-client-ca" (OuterVolumeSpecName: "client-ca") pod "92b258d3-fca5-41a8-ab0e-80d50a26db7b" (UID: "92b258d3-fca5-41a8-ab0e-80d50a26db7b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.801673 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-config" (OuterVolumeSpecName: "config") pod "92b258d3-fca5-41a8-ab0e-80d50a26db7b" (UID: "92b258d3-fca5-41a8-ab0e-80d50a26db7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.805922 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58609d84-bc4a-4d86-b809-b325339ceded-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "58609d84-bc4a-4d86-b809-b325339ceded" (UID: "58609d84-bc4a-4d86-b809-b325339ceded"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.806194 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58609d84-bc4a-4d86-b809-b325339ceded-kube-api-access-v7gg9" (OuterVolumeSpecName: "kube-api-access-v7gg9") pod "58609d84-bc4a-4d86-b809-b325339ceded" (UID: "58609d84-bc4a-4d86-b809-b325339ceded"). InnerVolumeSpecName "kube-api-access-v7gg9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.807241 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b258d3-fca5-41a8-ab0e-80d50a26db7b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "92b258d3-fca5-41a8-ab0e-80d50a26db7b" (UID: "92b258d3-fca5-41a8-ab0e-80d50a26db7b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.832628 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92b258d3-fca5-41a8-ab0e-80d50a26db7b-kube-api-access-2zlz2" (OuterVolumeSpecName: "kube-api-access-2zlz2") pod "92b258d3-fca5-41a8-ab0e-80d50a26db7b" (UID: "92b258d3-fca5-41a8-ab0e-80d50a26db7b"). InnerVolumeSpecName "kube-api-access-2zlz2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.900927 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-phbzv\" (UniqueName: \"kubernetes.io/projected/f142c5db-ee2b-442f-9702-dafbaf6da994-kube-api-access-phbzv\") pod \"route-controller-manager-84c66bb6b6-6dx2z\" (UID: \"f142c5db-ee2b-442f-9702-dafbaf6da994\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901338 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-proxy-ca-bundles\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901401 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f142c5db-ee2b-442f-9702-dafbaf6da994-config\") pod \"route-controller-manager-84c66bb6b6-6dx2z\" (UID: \"f142c5db-ee2b-442f-9702-dafbaf6da994\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901430 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-tmp\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901463 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-serving-cert\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901486 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-config\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901519 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f142c5db-ee2b-442f-9702-dafbaf6da994-client-ca\") pod \"route-controller-manager-84c66bb6b6-6dx2z\" (UID: \"f142c5db-ee2b-442f-9702-dafbaf6da994\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901549 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f142c5db-ee2b-442f-9702-dafbaf6da994-serving-cert\") pod \"route-controller-manager-84c66bb6b6-6dx2z\" (UID: \"f142c5db-ee2b-442f-9702-dafbaf6da994\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901582 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bx26n\" (UniqueName: \"kubernetes.io/projected/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-kube-api-access-bx26n\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901762 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f142c5db-ee2b-442f-9702-dafbaf6da994-tmp\") pod \"route-controller-manager-84c66bb6b6-6dx2z\" (UID: \"f142c5db-ee2b-442f-9702-dafbaf6da994\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901788 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-client-ca\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901853 5118 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901867 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2zlz2\" (UniqueName: \"kubernetes.io/projected/92b258d3-fca5-41a8-ab0e-80d50a26db7b-kube-api-access-2zlz2\") on node \"crc\" DevicePath \"\""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901881 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901893 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58609d84-bc4a-4d86-b809-b325339ceded-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901904 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92b258d3-fca5-41a8-ab0e-80d50a26db7b-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901915 5118 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92b258d3-fca5-41a8-ab0e-80d50a26db7b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901925 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58609d84-bc4a-4d86-b809-b325339ceded-tmp\") on node \"crc\" DevicePath \"\""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901935 5118 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58609d84-bc4a-4d86-b809-b325339ceded-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901947 5118 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/92b258d3-fca5-41a8-ab0e-80d50a26db7b-tmp\") on node \"crc\" DevicePath \"\""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901957 5118 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58609d84-bc4a-4d86-b809-b325339ceded-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.901968 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v7gg9\" (UniqueName: \"kubernetes.io/projected/58609d84-bc4a-4d86-b809-b325339ceded-kube-api-access-v7gg9\") on node \"crc\" DevicePath \"\""
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.902775 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-client-ca\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.903102 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-proxy-ca-bundles\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.903624 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-tmp\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.903742 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f142c5db-ee2b-442f-9702-dafbaf6da994-tmp\") pod \"route-controller-manager-84c66bb6b6-6dx2z\" (UID: \"f142c5db-ee2b-442f-9702-dafbaf6da994\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.904449 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f142c5db-ee2b-442f-9702-dafbaf6da994-config\") pod \"route-controller-manager-84c66bb6b6-6dx2z\" (UID: \"f142c5db-ee2b-442f-9702-dafbaf6da994\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.904682 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-config\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.904931 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f142c5db-ee2b-442f-9702-dafbaf6da994-client-ca\") pod \"route-controller-manager-84c66bb6b6-6dx2z\" (UID: \"f142c5db-ee2b-442f-9702-dafbaf6da994\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.907052 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f142c5db-ee2b-442f-9702-dafbaf6da994-serving-cert\") pod \"route-controller-manager-84c66bb6b6-6dx2z\" (UID: \"f142c5db-ee2b-442f-9702-dafbaf6da994\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.907726 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-serving-cert\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.919448 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-phbzv\" (UniqueName: \"kubernetes.io/projected/f142c5db-ee2b-442f-9702-dafbaf6da994-kube-api-access-phbzv\") pod \"route-controller-manager-84c66bb6b6-6dx2z\" (UID: \"f142c5db-ee2b-442f-9702-dafbaf6da994\") " pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:07 crc kubenswrapper[5118]: I0121 00:15:07.923691 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx26n\" (UniqueName: \"kubernetes.io/projected/7c6b75bb-d8be-4aff-b760-cd0e074d7e49-kube-api-access-bx26n\") pod \"controller-manager-588787f94b-cgzf5\" (UID: \"7c6b75bb-d8be-4aff-b760-cd0e074d7e49\") " pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5"
Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.014991 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"
Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.022579 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5" Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.229892 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z"] Jan 21 00:15:08 crc kubenswrapper[5118]: W0121 00:15:08.235627 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf142c5db_ee2b_442f_9702_dafbaf6da994.slice/crio-6f9c209fcbeb8b597945d5f4bada2156ee5466ddc37033f7b2683816b68f8475 WatchSource:0}: Error finding container 6f9c209fcbeb8b597945d5f4bada2156ee5466ddc37033f7b2683816b68f8475: Status 404 returned error can't find the container with id 6f9c209fcbeb8b597945d5f4bada2156ee5466ddc37033f7b2683816b68f8475 Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.501066 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" event={"ID":"58609d84-bc4a-4d86-b809-b325339ceded","Type":"ContainerDied","Data":"e5133c8ca2b58c635c3cc302fa592c4a9ab1455bd4db35bacdfbb68c555e9b88"} Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.501407 5118 scope.go:117] "RemoveContainer" containerID="e680e479c4e636b91fd2974243a1ba95ce4bd187e6b03032a82459784c0157cb" Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.501146 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq" Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.503624 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" event={"ID":"92b258d3-fca5-41a8-ab0e-80d50a26db7b","Type":"ContainerDied","Data":"571120ecee06bc030913714454ec86b6d3bed63fa2c78527f81112a8c6758bf5"} Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.503671 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-765c599d67-96mkg" Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.505344 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z" event={"ID":"f142c5db-ee2b-442f-9702-dafbaf6da994","Type":"ContainerStarted","Data":"3dd3457783e564aa859589a070012551a2c722e37250cf09c1d402405cd41241"} Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.505381 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z" event={"ID":"f142c5db-ee2b-442f-9702-dafbaf6da994","Type":"ContainerStarted","Data":"6f9c209fcbeb8b597945d5f4bada2156ee5466ddc37033f7b2683816b68f8475"} Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.505593 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z" Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.518761 5118 scope.go:117] "RemoveContainer" containerID="44d41c951009faaf8403ec57a7e4f420f02131f6f5ae3d25db4a9c7b7b240377" Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.523039 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-588787f94b-cgzf5"] Jan 21 00:15:08 crc 
kubenswrapper[5118]: I0121 00:15:08.531309 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z" podStartSLOduration=3.53128573 podStartE2EDuration="3.53128573s" podCreationTimestamp="2026-01-21 00:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:15:08.528857693 +0000 UTC m=+363.853104711" watchObservedRunningTime="2026-01-21 00:15:08.53128573 +0000 UTC m=+363.855532788" Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.553672 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-765c599d67-96mkg"] Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.571737 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-765c599d67-96mkg"] Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.577670 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq"] Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.578390 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f4794877-bj9qq"] Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.982283 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58609d84-bc4a-4d86-b809-b325339ceded" path="/var/lib/kubelet/pods/58609d84-bc4a-4d86-b809-b325339ceded/volumes" Jan 21 00:15:08 crc kubenswrapper[5118]: I0121 00:15:08.982973 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92b258d3-fca5-41a8-ab0e-80d50a26db7b" path="/var/lib/kubelet/pods/92b258d3-fca5-41a8-ab0e-80d50a26db7b/volumes" Jan 21 00:15:09 crc kubenswrapper[5118]: I0121 00:15:09.005246 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-route-controller-manager/route-controller-manager-84c66bb6b6-6dx2z" Jan 21 00:15:09 crc kubenswrapper[5118]: I0121 00:15:09.511059 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5" event={"ID":"7c6b75bb-d8be-4aff-b760-cd0e074d7e49","Type":"ContainerStarted","Data":"ff2a766f4e35c6c907de7a8600f2bb41dffbe541fa35b6e786d0d5a641a27a6d"} Jan 21 00:15:09 crc kubenswrapper[5118]: I0121 00:15:09.511426 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5" event={"ID":"7c6b75bb-d8be-4aff-b760-cd0e074d7e49","Type":"ContainerStarted","Data":"d7da6b57835d280e3ffd3bd27c1645694484e71746bd87ded9b1c1627d847afe"} Jan 21 00:15:09 crc kubenswrapper[5118]: I0121 00:15:09.511448 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5" Jan 21 00:15:09 crc kubenswrapper[5118]: I0121 00:15:09.518151 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5" Jan 21 00:15:09 crc kubenswrapper[5118]: I0121 00:15:09.527931 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-588787f94b-cgzf5" podStartSLOduration=4.527917845 podStartE2EDuration="4.527917845s" podCreationTimestamp="2026-01-21 00:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:15:09.523785312 +0000 UTC m=+364.848032330" watchObservedRunningTime="2026-01-21 00:15:09.527917845 +0000 UTC m=+364.852164863" Jan 21 00:15:10 crc kubenswrapper[5118]: I0121 00:15:10.387945 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-sfj8c" Jan 
21 00:15:10 crc kubenswrapper[5118]: I0121 00:15:10.450026 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tlb84"] Jan 21 00:15:35 crc kubenswrapper[5118]: I0121 00:15:35.496582 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-tlb84" podUID="0d503143-f75b-40e6-b0e3-d1bd595a05ae" containerName="registry" containerID="cri-o://50af60c127f5188bd7aca1af9976285e835780921bf53a2f57465bd5da7f3a21" gracePeriod=30 Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.485448 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.582818 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6jsw\" (UniqueName: \"kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-kube-api-access-t6jsw\") pod \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.582890 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d503143-f75b-40e6-b0e3-d1bd595a05ae-registry-certificates\") pod \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.582950 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-registry-tls\") pod \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.583056 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-bound-sa-token\") pod \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.583132 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d503143-f75b-40e6-b0e3-d1bd595a05ae-installation-pull-secrets\") pod \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.583254 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d503143-f75b-40e6-b0e3-d1bd595a05ae-ca-trust-extracted\") pod \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.583339 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d503143-f75b-40e6-b0e3-d1bd595a05ae-trusted-ca\") pod \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.583491 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\" (UID: \"0d503143-f75b-40e6-b0e3-d1bd595a05ae\") " Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.584089 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d503143-f75b-40e6-b0e3-d1bd595a05ae-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0d503143-f75b-40e6-b0e3-d1bd595a05ae" (UID: 
"0d503143-f75b-40e6-b0e3-d1bd595a05ae"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.584195 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d503143-f75b-40e6-b0e3-d1bd595a05ae-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0d503143-f75b-40e6-b0e3-d1bd595a05ae" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.589068 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-kube-api-access-t6jsw" (OuterVolumeSpecName: "kube-api-access-t6jsw") pod "0d503143-f75b-40e6-b0e3-d1bd595a05ae" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae"). InnerVolumeSpecName "kube-api-access-t6jsw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.589302 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d503143-f75b-40e6-b0e3-d1bd595a05ae-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0d503143-f75b-40e6-b0e3-d1bd595a05ae" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.590411 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0d503143-f75b-40e6-b0e3-d1bd595a05ae" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.592093 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0d503143-f75b-40e6-b0e3-d1bd595a05ae" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.592932 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "0d503143-f75b-40e6-b0e3-d1bd595a05ae" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.600319 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d503143-f75b-40e6-b0e3-d1bd595a05ae-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0d503143-f75b-40e6-b0e3-d1bd595a05ae" (UID: "0d503143-f75b-40e6-b0e3-d1bd595a05ae"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.680217 5118 generic.go:358] "Generic (PLEG): container finished" podID="0d503143-f75b-40e6-b0e3-d1bd595a05ae" containerID="50af60c127f5188bd7aca1af9976285e835780921bf53a2f57465bd5da7f3a21" exitCode=0 Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.680273 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-tlb84" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.680413 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-tlb84" event={"ID":"0d503143-f75b-40e6-b0e3-d1bd595a05ae","Type":"ContainerDied","Data":"50af60c127f5188bd7aca1af9976285e835780921bf53a2f57465bd5da7f3a21"} Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.680508 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-tlb84" event={"ID":"0d503143-f75b-40e6-b0e3-d1bd595a05ae","Type":"ContainerDied","Data":"453e704f337a09cc1d6dd181cdc31ee5be695264ee553e6867ffe307fbfd48fc"} Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.680537 5118 scope.go:117] "RemoveContainer" containerID="50af60c127f5188bd7aca1af9976285e835780921bf53a2f57465bd5da7f3a21" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.684903 5118 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.684930 5118 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.684940 5118 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d503143-f75b-40e6-b0e3-d1bd595a05ae-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.684949 5118 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d503143-f75b-40e6-b0e3-d1bd595a05ae-ca-trust-extracted\") on node 
\"crc\" DevicePath \"\"" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.684957 5118 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d503143-f75b-40e6-b0e3-d1bd595a05ae-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.684966 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t6jsw\" (UniqueName: \"kubernetes.io/projected/0d503143-f75b-40e6-b0e3-d1bd595a05ae-kube-api-access-t6jsw\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.684974 5118 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d503143-f75b-40e6-b0e3-d1bd595a05ae-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.704453 5118 scope.go:117] "RemoveContainer" containerID="50af60c127f5188bd7aca1af9976285e835780921bf53a2f57465bd5da7f3a21" Jan 21 00:15:36 crc kubenswrapper[5118]: E0121 00:15:36.704885 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50af60c127f5188bd7aca1af9976285e835780921bf53a2f57465bd5da7f3a21\": container with ID starting with 50af60c127f5188bd7aca1af9976285e835780921bf53a2f57465bd5da7f3a21 not found: ID does not exist" containerID="50af60c127f5188bd7aca1af9976285e835780921bf53a2f57465bd5da7f3a21" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.704929 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50af60c127f5188bd7aca1af9976285e835780921bf53a2f57465bd5da7f3a21"} err="failed to get container status \"50af60c127f5188bd7aca1af9976285e835780921bf53a2f57465bd5da7f3a21\": rpc error: code = NotFound desc = could not find container \"50af60c127f5188bd7aca1af9976285e835780921bf53a2f57465bd5da7f3a21\": container with ID starting with 
50af60c127f5188bd7aca1af9976285e835780921bf53a2f57465bd5da7f3a21 not found: ID does not exist" Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.707208 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tlb84"] Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.717256 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tlb84"] Jan 21 00:15:36 crc kubenswrapper[5118]: I0121 00:15:36.985934 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d503143-f75b-40e6-b0e3-d1bd595a05ae" path="/var/lib/kubelet/pods/0d503143-f75b-40e6-b0e3-d1bd595a05ae/volumes" Jan 21 00:16:00 crc kubenswrapper[5118]: I0121 00:16:00.150721 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482576-pvc4n"] Jan 21 00:16:00 crc kubenswrapper[5118]: I0121 00:16:00.152534 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0d503143-f75b-40e6-b0e3-d1bd595a05ae" containerName="registry" Jan 21 00:16:00 crc kubenswrapper[5118]: I0121 00:16:00.152561 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d503143-f75b-40e6-b0e3-d1bd595a05ae" containerName="registry" Jan 21 00:16:00 crc kubenswrapper[5118]: I0121 00:16:00.152801 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="0d503143-f75b-40e6-b0e3-d1bd595a05ae" containerName="registry" Jan 21 00:16:00 crc kubenswrapper[5118]: I0121 00:16:00.159012 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482576-pvc4n" Jan 21 00:16:00 crc kubenswrapper[5118]: I0121 00:16:00.161146 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482576-pvc4n"] Jan 21 00:16:00 crc kubenswrapper[5118]: I0121 00:16:00.162311 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 00:16:00 crc kubenswrapper[5118]: I0121 00:16:00.162674 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 00:16:00 crc kubenswrapper[5118]: I0121 00:16:00.162878 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 00:16:00 crc kubenswrapper[5118]: I0121 00:16:00.303435 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k5tj\" (UniqueName: \"kubernetes.io/projected/2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce-kube-api-access-7k5tj\") pod \"auto-csr-approver-29482576-pvc4n\" (UID: \"2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce\") " pod="openshift-infra/auto-csr-approver-29482576-pvc4n" Jan 21 00:16:00 crc kubenswrapper[5118]: I0121 00:16:00.404806 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7k5tj\" (UniqueName: \"kubernetes.io/projected/2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce-kube-api-access-7k5tj\") pod \"auto-csr-approver-29482576-pvc4n\" (UID: \"2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce\") " pod="openshift-infra/auto-csr-approver-29482576-pvc4n" Jan 21 00:16:00 crc kubenswrapper[5118]: I0121 00:16:00.426771 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k5tj\" (UniqueName: \"kubernetes.io/projected/2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce-kube-api-access-7k5tj\") pod \"auto-csr-approver-29482576-pvc4n\" (UID: 
\"2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce\") " pod="openshift-infra/auto-csr-approver-29482576-pvc4n" Jan 21 00:16:00 crc kubenswrapper[5118]: I0121 00:16:00.494670 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482576-pvc4n" Jan 21 00:16:00 crc kubenswrapper[5118]: I0121 00:16:00.767820 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482576-pvc4n"] Jan 21 00:16:00 crc kubenswrapper[5118]: W0121 00:16:00.775509 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f1c0214_a0cf_41c8_b79e_c7d666c4d7ce.slice/crio-a05e920256f82b0d0e3516a82da44b5e33a774971fe539d46baca43e94d79396 WatchSource:0}: Error finding container a05e920256f82b0d0e3516a82da44b5e33a774971fe539d46baca43e94d79396: Status 404 returned error can't find the container with id a05e920256f82b0d0e3516a82da44b5e33a774971fe539d46baca43e94d79396 Jan 21 00:16:00 crc kubenswrapper[5118]: I0121 00:16:00.854633 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482576-pvc4n" event={"ID":"2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce","Type":"ContainerStarted","Data":"a05e920256f82b0d0e3516a82da44b5e33a774971fe539d46baca43e94d79396"} Jan 21 00:16:04 crc kubenswrapper[5118]: I0121 00:16:04.409050 5118 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-nm788" Jan 21 00:16:04 crc kubenswrapper[5118]: I0121 00:16:04.429719 5118 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-nm788" Jan 21 00:16:04 crc kubenswrapper[5118]: I0121 00:16:04.880757 5118 generic.go:358] "Generic (PLEG): container finished" podID="2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce" containerID="80d637fd8b1b3dd2344197a9b35b41fe213fdc203deef8260a1114dc44892e7b" exitCode=0 Jan 21 00:16:04 crc 
kubenswrapper[5118]: I0121 00:16:04.880884 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482576-pvc4n" event={"ID":"2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce","Type":"ContainerDied","Data":"80d637fd8b1b3dd2344197a9b35b41fe213fdc203deef8260a1114dc44892e7b"} Jan 21 00:16:05 crc kubenswrapper[5118]: I0121 00:16:05.431290 5118 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-20 00:11:04 +0000 UTC" deadline="2026-02-12 06:46:43.146899279 +0000 UTC" Jan 21 00:16:05 crc kubenswrapper[5118]: I0121 00:16:05.431720 5118 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="534h30m37.715184439s" Jan 21 00:16:06 crc kubenswrapper[5118]: I0121 00:16:06.232299 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482576-pvc4n" Jan 21 00:16:06 crc kubenswrapper[5118]: I0121 00:16:06.278125 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k5tj\" (UniqueName: \"kubernetes.io/projected/2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce-kube-api-access-7k5tj\") pod \"2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce\" (UID: \"2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce\") " Jan 21 00:16:06 crc kubenswrapper[5118]: I0121 00:16:06.286754 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce-kube-api-access-7k5tj" (OuterVolumeSpecName: "kube-api-access-7k5tj") pod "2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce" (UID: "2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce"). InnerVolumeSpecName "kube-api-access-7k5tj". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:16:06 crc kubenswrapper[5118]: I0121 00:16:06.379789 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7k5tj\" (UniqueName: \"kubernetes.io/projected/2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce-kube-api-access-7k5tj\") on node \"crc\" DevicePath \"\""
Jan 21 00:16:06 crc kubenswrapper[5118]: I0121 00:16:06.898704 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482576-pvc4n"
Jan 21 00:16:06 crc kubenswrapper[5118]: I0121 00:16:06.898685 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482576-pvc4n" event={"ID":"2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce","Type":"ContainerDied","Data":"a05e920256f82b0d0e3516a82da44b5e33a774971fe539d46baca43e94d79396"}
Jan 21 00:16:06 crc kubenswrapper[5118]: I0121 00:16:06.898883 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a05e920256f82b0d0e3516a82da44b5e33a774971fe539d46baca43e94d79396"
Jan 21 00:16:33 crc kubenswrapper[5118]: I0121 00:16:33.801235 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:16:33 crc kubenswrapper[5118]: I0121 00:16:33.801885 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:17:03 crc kubenswrapper[5118]: I0121 00:17:03.800514 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:17:03 crc kubenswrapper[5118]: I0121 00:17:03.802336 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:17:33 crc kubenswrapper[5118]: I0121 00:17:33.800507 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:17:33 crc kubenswrapper[5118]: I0121 00:17:33.801323 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:17:33 crc kubenswrapper[5118]: I0121 00:17:33.801378 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-22r9n"
Jan 21 00:17:33 crc kubenswrapper[5118]: I0121 00:17:33.801926 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92f94cff427bbfd2ea80a4772b8465005fc945125ca4b7e3c490d52f65cdb761"} pod="openshift-machine-config-operator/machine-config-daemon-22r9n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 00:17:33 crc kubenswrapper[5118]: I0121 00:17:33.801982 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" containerID="cri-o://92f94cff427bbfd2ea80a4772b8465005fc945125ca4b7e3c490d52f65cdb761" gracePeriod=600
Jan 21 00:17:34 crc kubenswrapper[5118]: I0121 00:17:34.507662 5118 generic.go:358] "Generic (PLEG): container finished" podID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerID="92f94cff427bbfd2ea80a4772b8465005fc945125ca4b7e3c490d52f65cdb761" exitCode=0
Jan 21 00:17:34 crc kubenswrapper[5118]: I0121 00:17:34.507732 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerDied","Data":"92f94cff427bbfd2ea80a4772b8465005fc945125ca4b7e3c490d52f65cdb761"}
Jan 21 00:17:34 crc kubenswrapper[5118]: I0121 00:17:34.508358 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"7922e95afa9e80095c69f7b0a751dd320865224ec2831af4c9a2dcde9659cd54"}
Jan 21 00:17:34 crc kubenswrapper[5118]: I0121 00:17:34.508427 5118 scope.go:117] "RemoveContainer" containerID="ebce512679b1ac6a1172cf6df51d1cdffd5fd6e643bd11e70ffe7482570cd359"
Jan 21 00:18:00 crc kubenswrapper[5118]: I0121 00:18:00.153971 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482578-ts7d8"]
Jan 21 00:18:00 crc kubenswrapper[5118]: I0121 00:18:00.155540 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce" containerName="oc"
Jan 21 00:18:00 crc kubenswrapper[5118]: I0121 00:18:00.155557 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce" containerName="oc"
Jan 21 00:18:00 crc kubenswrapper[5118]: I0121 00:18:00.155671 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce" containerName="oc"
Jan 21 00:18:00 crc kubenswrapper[5118]: I0121 00:18:00.171851 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482578-ts7d8"]
Jan 21 00:18:00 crc kubenswrapper[5118]: I0121 00:18:00.172069 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482578-ts7d8"
Jan 21 00:18:00 crc kubenswrapper[5118]: I0121 00:18:00.174106 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 00:18:00 crc kubenswrapper[5118]: I0121 00:18:00.174770 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\""
Jan 21 00:18:00 crc kubenswrapper[5118]: I0121 00:18:00.174882 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 00:18:00 crc kubenswrapper[5118]: I0121 00:18:00.296286 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc7s6\" (UniqueName: \"kubernetes.io/projected/bc699375-1fce-467a-a767-ec49bc9bf989-kube-api-access-tc7s6\") pod \"auto-csr-approver-29482578-ts7d8\" (UID: \"bc699375-1fce-467a-a767-ec49bc9bf989\") " pod="openshift-infra/auto-csr-approver-29482578-ts7d8"
Jan 21 00:18:00 crc kubenswrapper[5118]: I0121 00:18:00.397991 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tc7s6\" (UniqueName: \"kubernetes.io/projected/bc699375-1fce-467a-a767-ec49bc9bf989-kube-api-access-tc7s6\") pod \"auto-csr-approver-29482578-ts7d8\" (UID: \"bc699375-1fce-467a-a767-ec49bc9bf989\") " pod="openshift-infra/auto-csr-approver-29482578-ts7d8"
Jan 21 00:18:00 crc kubenswrapper[5118]: I0121 00:18:00.420481 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc7s6\" (UniqueName: \"kubernetes.io/projected/bc699375-1fce-467a-a767-ec49bc9bf989-kube-api-access-tc7s6\") pod \"auto-csr-approver-29482578-ts7d8\" (UID: \"bc699375-1fce-467a-a767-ec49bc9bf989\") " pod="openshift-infra/auto-csr-approver-29482578-ts7d8"
Jan 21 00:18:00 crc kubenswrapper[5118]: I0121 00:18:00.509386 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482578-ts7d8"
Jan 21 00:18:00 crc kubenswrapper[5118]: I0121 00:18:00.704090 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482578-ts7d8"]
Jan 21 00:18:01 crc kubenswrapper[5118]: I0121 00:18:01.691954 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482578-ts7d8" event={"ID":"bc699375-1fce-467a-a767-ec49bc9bf989","Type":"ContainerStarted","Data":"47348211f09e3ab7aba0805ceb6b17fc9d2e7e9c9b0b4115b241e67df5f9909c"}
Jan 21 00:18:02 crc kubenswrapper[5118]: I0121 00:18:02.701075 5118 generic.go:358] "Generic (PLEG): container finished" podID="bc699375-1fce-467a-a767-ec49bc9bf989" containerID="d22738ae34a46dfe49021bd3762832f99742cb30853bddfdb2566d7bc46129e9" exitCode=0
Jan 21 00:18:02 crc kubenswrapper[5118]: I0121 00:18:02.701632 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482578-ts7d8" event={"ID":"bc699375-1fce-467a-a767-ec49bc9bf989","Type":"ContainerDied","Data":"d22738ae34a46dfe49021bd3762832f99742cb30853bddfdb2566d7bc46129e9"}
Jan 21 00:18:03 crc kubenswrapper[5118]: I0121 00:18:03.931387 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482578-ts7d8"
Jan 21 00:18:03 crc kubenswrapper[5118]: I0121 00:18:03.947886 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc7s6\" (UniqueName: \"kubernetes.io/projected/bc699375-1fce-467a-a767-ec49bc9bf989-kube-api-access-tc7s6\") pod \"bc699375-1fce-467a-a767-ec49bc9bf989\" (UID: \"bc699375-1fce-467a-a767-ec49bc9bf989\") "
Jan 21 00:18:03 crc kubenswrapper[5118]: I0121 00:18:03.953577 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc699375-1fce-467a-a767-ec49bc9bf989-kube-api-access-tc7s6" (OuterVolumeSpecName: "kube-api-access-tc7s6") pod "bc699375-1fce-467a-a767-ec49bc9bf989" (UID: "bc699375-1fce-467a-a767-ec49bc9bf989"). InnerVolumeSpecName "kube-api-access-tc7s6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:18:04 crc kubenswrapper[5118]: I0121 00:18:04.048834 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tc7s6\" (UniqueName: \"kubernetes.io/projected/bc699375-1fce-467a-a767-ec49bc9bf989-kube-api-access-tc7s6\") on node \"crc\" DevicePath \"\""
Jan 21 00:18:04 crc kubenswrapper[5118]: I0121 00:18:04.713850 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482578-ts7d8"
Jan 21 00:18:04 crc kubenswrapper[5118]: I0121 00:18:04.713884 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482578-ts7d8" event={"ID":"bc699375-1fce-467a-a767-ec49bc9bf989","Type":"ContainerDied","Data":"47348211f09e3ab7aba0805ceb6b17fc9d2e7e9c9b0b4115b241e67df5f9909c"}
Jan 21 00:18:04 crc kubenswrapper[5118]: I0121 00:18:04.713919 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47348211f09e3ab7aba0805ceb6b17fc9d2e7e9c9b0b4115b241e67df5f9909c"
Jan 21 00:19:05 crc kubenswrapper[5118]: I0121 00:19:05.162094 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 00:19:05 crc kubenswrapper[5118]: I0121 00:19:05.163109 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.633746 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6"]
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.634597 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" podUID="ddc3c284-5d85-4e40-b285-f16062ad8d9c" containerName="kube-rbac-proxy" containerID="cri-o://9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959" gracePeriod=30
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.634953 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" podUID="ddc3c284-5d85-4e40-b285-f16062ad8d9c" containerName="ovnkube-cluster-manager" containerID="cri-o://65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828" gracePeriod=30
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.846591 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h8fs2"]
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.847528 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="ovn-controller" containerID="cri-o://3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407" gracePeriod=30
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.847655 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="nbdb" containerID="cri-o://e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e" gracePeriod=30
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.847925 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="sbdb" containerID="cri-o://47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9" gracePeriod=30
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.848026 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="kube-rbac-proxy-node" containerID="cri-o://44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3" gracePeriod=30
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.848090 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="northd" containerID="cri-o://1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1" gracePeriod=30
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.848169 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f" gracePeriod=30
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.848243 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="ovn-acl-logging" containerID="cri-o://5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961" gracePeriod=30
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.856206 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6"
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.879334 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="ovnkube-controller" containerID="cri-o://cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a" gracePeriod=30
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.883135 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9"]
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.883700 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ddc3c284-5d85-4e40-b285-f16062ad8d9c" containerName="kube-rbac-proxy"
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.883717 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc3c284-5d85-4e40-b285-f16062ad8d9c" containerName="kube-rbac-proxy"
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.883727 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bc699375-1fce-467a-a767-ec49bc9bf989" containerName="oc"
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.883732 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc699375-1fce-467a-a767-ec49bc9bf989" containerName="oc"
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.883752 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ddc3c284-5d85-4e40-b285-f16062ad8d9c" containerName="ovnkube-cluster-manager"
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.883758 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc3c284-5d85-4e40-b285-f16062ad8d9c" containerName="ovnkube-cluster-manager"
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.883843 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="bc699375-1fce-467a-a767-ec49bc9bf989" containerName="oc"
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.883851 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="ddc3c284-5d85-4e40-b285-f16062ad8d9c" containerName="ovnkube-cluster-manager"
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.883861 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="ddc3c284-5d85-4e40-b285-f16062ad8d9c" containerName="kube-rbac-proxy"
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.932881 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9"
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.973081 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzdws\" (UniqueName: \"kubernetes.io/projected/ddc3c284-5d85-4e40-b285-f16062ad8d9c-kube-api-access-fzdws\") pod \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") "
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.973179 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ddc3c284-5d85-4e40-b285-f16062ad8d9c-env-overrides\") pod \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") "
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.973236 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ddc3c284-5d85-4e40-b285-f16062ad8d9c-ovn-control-plane-metrics-cert\") pod \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") "
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.973276 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ddc3c284-5d85-4e40-b285-f16062ad8d9c-ovnkube-config\") pod \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\" (UID: \"ddc3c284-5d85-4e40-b285-f16062ad8d9c\") "
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.973901 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddc3c284-5d85-4e40-b285-f16062ad8d9c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ddc3c284-5d85-4e40-b285-f16062ad8d9c" (UID: "ddc3c284-5d85-4e40-b285-f16062ad8d9c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.973909 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddc3c284-5d85-4e40-b285-f16062ad8d9c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ddc3c284-5d85-4e40-b285-f16062ad8d9c" (UID: "ddc3c284-5d85-4e40-b285-f16062ad8d9c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.979207 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddc3c284-5d85-4e40-b285-f16062ad8d9c-kube-api-access-fzdws" (OuterVolumeSpecName: "kube-api-access-fzdws") pod "ddc3c284-5d85-4e40-b285-f16062ad8d9c" (UID: "ddc3c284-5d85-4e40-b285-f16062ad8d9c"). InnerVolumeSpecName "kube-api-access-fzdws". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:19:45 crc kubenswrapper[5118]: I0121 00:19:45.979259 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddc3c284-5d85-4e40-b285-f16062ad8d9c-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "ddc3c284-5d85-4e40-b285-f16062ad8d9c" (UID: "ddc3c284-5d85-4e40-b285-f16062ad8d9c"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.074999 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4911732e-00d3-4732-bee3-18f866800bde-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-grht9\" (UID: \"4911732e-00d3-4732-bee3-18f866800bde\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.075132 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4911732e-00d3-4732-bee3-18f866800bde-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-grht9\" (UID: \"4911732e-00d3-4732-bee3-18f866800bde\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.075990 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4911732e-00d3-4732-bee3-18f866800bde-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-grht9\" (UID: \"4911732e-00d3-4732-bee3-18f866800bde\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.076066 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-958qt\" (UniqueName: \"kubernetes.io/projected/4911732e-00d3-4732-bee3-18f866800bde-kube-api-access-958qt\") pod \"ovnkube-control-plane-97c9b6c48-grht9\" (UID: \"4911732e-00d3-4732-bee3-18f866800bde\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.076144 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fzdws\" (UniqueName: \"kubernetes.io/projected/ddc3c284-5d85-4e40-b285-f16062ad8d9c-kube-api-access-fzdws\") on node \"crc\" DevicePath \"\""
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.076205 5118 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ddc3c284-5d85-4e40-b285-f16062ad8d9c-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.076219 5118 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ddc3c284-5d85-4e40-b285-f16062ad8d9c-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.076232 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ddc3c284-5d85-4e40-b285-f16062ad8d9c-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.176682 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4911732e-00d3-4732-bee3-18f866800bde-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-grht9\" (UID: \"4911732e-00d3-4732-bee3-18f866800bde\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.176742 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4911732e-00d3-4732-bee3-18f866800bde-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-grht9\" (UID: \"4911732e-00d3-4732-bee3-18f866800bde\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.176792 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-958qt\" (UniqueName: \"kubernetes.io/projected/4911732e-00d3-4732-bee3-18f866800bde-kube-api-access-958qt\") pod \"ovnkube-control-plane-97c9b6c48-grht9\" (UID: \"4911732e-00d3-4732-bee3-18f866800bde\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.176841 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4911732e-00d3-4732-bee3-18f866800bde-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-grht9\" (UID: \"4911732e-00d3-4732-bee3-18f866800bde\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.177550 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4911732e-00d3-4732-bee3-18f866800bde-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-grht9\" (UID: \"4911732e-00d3-4732-bee3-18f866800bde\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.177548 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4911732e-00d3-4732-bee3-18f866800bde-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-grht9\" (UID: \"4911732e-00d3-4732-bee3-18f866800bde\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.184959 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4911732e-00d3-4732-bee3-18f866800bde-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-grht9\" (UID: \"4911732e-00d3-4732-bee3-18f866800bde\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.194774 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-958qt\" (UniqueName: \"kubernetes.io/projected/4911732e-00d3-4732-bee3-18f866800bde-kube-api-access-958qt\") pod \"ovnkube-control-plane-97c9b6c48-grht9\" (UID: \"4911732e-00d3-4732-bee3-18f866800bde\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.322971 5118 generic.go:358] "Generic (PLEG): container finished" podID="ddc3c284-5d85-4e40-b285-f16062ad8d9c" containerID="65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828" exitCode=0
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.323199 5118 generic.go:358] "Generic (PLEG): container finished" podID="ddc3c284-5d85-4e40-b285-f16062ad8d9c" containerID="9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959" exitCode=0
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.323112 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.323025 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" event={"ID":"ddc3c284-5d85-4e40-b285-f16062ad8d9c","Type":"ContainerDied","Data":"65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828"}
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.323758 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" event={"ID":"ddc3c284-5d85-4e40-b285-f16062ad8d9c","Type":"ContainerDied","Data":"9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959"}
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.323812 5118 scope.go:117] "RemoveContainer" containerID="65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.323868 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6" event={"ID":"ddc3c284-5d85-4e40-b285-f16062ad8d9c","Type":"ContainerDied","Data":"571babcd7c15278c84d993cda54ba05119616b4820b1104f1e0cd4bc0b5e5b9d"}
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.325764 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.325823 5118 generic.go:358] "Generic (PLEG): container finished" podID="7c0390f5-26b4-4299-958c-acac058be619" containerID="a76c675001b1e3a4e3d344ae261bddc8ead10e9d0619b5012a61c50027134efe" exitCode=2
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.325943 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qcqwq" event={"ID":"7c0390f5-26b4-4299-958c-acac058be619","Type":"ContainerDied","Data":"a76c675001b1e3a4e3d344ae261bddc8ead10e9d0619b5012a61c50027134efe"}
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.326646 5118 scope.go:117] "RemoveContainer" containerID="a76c675001b1e3a4e3d344ae261bddc8ead10e9d0619b5012a61c50027134efe"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.327911 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.332239 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.335759 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h8fs2_91e46657-55ca-43e7-9a43-6bb875c7debf/ovn-acl-logging/0.log"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.336125 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h8fs2_91e46657-55ca-43e7-9a43-6bb875c7debf/ovn-controller/0.log"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.336551 5118 generic.go:358] "Generic (PLEG): container finished" podID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerID="e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e" exitCode=0
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.336631 5118 generic.go:358] "Generic (PLEG): container finished" podID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerID="6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f" exitCode=0
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.336684 5118 generic.go:358] "Generic (PLEG): container finished" podID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerID="44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3" exitCode=0
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.336750 5118 generic.go:358] "Generic (PLEG): container finished" podID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerID="5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961" exitCode=143
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.336804 5118 generic.go:358] "Generic (PLEG): container finished" podID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerID="3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407" exitCode=143
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.336729 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerDied","Data":"e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e"}
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.336960 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerDied","Data":"6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f"}
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.337056 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerDied","Data":"44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3"}
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.337113 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerDied","Data":"5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961"}
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.337207 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerDied","Data":"3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407"}
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.342898 5118 scope.go:117] "RemoveContainer" containerID="9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.360912 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6"]
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.364416 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kzdr6"]
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.365528 5118 scope.go:117] "RemoveContainer" containerID="65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828"
Jan 21 00:19:46 crc kubenswrapper[5118]: E0121 00:19:46.365943 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828\": container with ID starting with 65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828 not found: ID does not exist" containerID="65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.365973 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828"} err="failed to get container status \"65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828\": rpc error: code = NotFound desc = could not find container \"65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828\": container with ID starting with 65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828 not found: ID does not exist"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.365994 5118 scope.go:117] "RemoveContainer" containerID="9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959"
Jan 21 00:19:46 crc kubenswrapper[5118]: E0121 00:19:46.366389 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959\": container with ID starting with 9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959 not found: ID does not exist" containerID="9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.366417 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959"} err="failed to get container status \"9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959\": rpc error: code = NotFound desc = could not find container \"9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959\": container with ID starting with 9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959 not found: ID does not exist"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.366441 5118 scope.go:117] "RemoveContainer" containerID="65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.366673 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828"} err="failed to get container status \"65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828\": rpc error: code = NotFound desc = could not find container \"65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828\": container with ID starting with 65392a9879c2e02a5dbcb62a596d34645ad85b9b7b3896c61bca86c58f88d828 not found: ID does not exist"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.366700 5118 scope.go:117] "RemoveContainer" containerID="9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.366902 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959"} err="failed to get container status \"9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959\": rpc error: code = NotFound desc = could not find container \"9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959\": container with ID starting with 9637b49ba947d6911df74f39bce845599b33208468341e274f87b282be511959 not found: ID does not exist"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.589572 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h8fs2_91e46657-55ca-43e7-9a43-6bb875c7debf/ovn-acl-logging/0.log"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.590360 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h8fs2_91e46657-55ca-43e7-9a43-6bb875c7debf/ovn-controller/0.log"
Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.590736 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.648716 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-slcxb"] Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.650461 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="northd" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.650564 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="northd" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.650633 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="sbdb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.650685 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="sbdb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.650737 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.650788 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.650860 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="ovnkube-controller" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.650921 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="ovnkube-controller" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651011 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="ovn-acl-logging" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651077 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="ovn-acl-logging" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651151 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="kubecfg-setup" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651251 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="kubecfg-setup" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651313 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="ovn-controller" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651359 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="ovn-controller" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651415 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="nbdb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651470 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="nbdb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651519 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="kube-rbac-proxy-node" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651563 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="kube-rbac-proxy-node" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651699 5118 
memory_manager.go:356] "RemoveStaleState removing state" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="nbdb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651758 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="ovn-controller" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651807 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651858 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="sbdb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651912 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="ovnkube-controller" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.651964 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="kube-rbac-proxy-node" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.652014 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="northd" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.652067 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerName="ovn-acl-logging" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.657584 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684201 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-ovn\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684324 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-slash\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684407 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-config\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684456 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-etc-openvswitch\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684531 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91e46657-55ca-43e7-9a43-6bb875c7debf-ovn-node-metrics-cert\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684552 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-systemd\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684593 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-cni-bin\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684631 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-script-lib\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684657 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-log-socket\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684728 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-systemd-units\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684749 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-env-overrides\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 
00:19:46.684771 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-run-netns\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684790 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684820 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-openvswitch\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684837 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-node-log\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684855 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-kubelet\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684894 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfh6k\" (UniqueName: 
\"kubernetes.io/projected/91e46657-55ca-43e7-9a43-6bb875c7debf-kube-api-access-bfh6k\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684923 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-var-lib-openvswitch\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684948 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-cni-netd\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.684969 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-run-ovn-kubernetes\") pod \"91e46657-55ca-43e7-9a43-6bb875c7debf\" (UID: \"91e46657-55ca-43e7-9a43-6bb875c7debf\") " Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.685371 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.685420 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.685450 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-slash" (OuterVolumeSpecName: "host-slash") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.686631 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.686677 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.687491 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.689384 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.689479 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.689555 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-node-log" (OuterVolumeSpecName: "node-log") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.689590 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.689701 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.689740 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-log-socket" (OuterVolumeSpecName: "log-socket") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.689768 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.690083 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.690210 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.690256 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.690391 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.696395 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e46657-55ca-43e7-9a43-6bb875c7debf-kube-api-access-bfh6k" (OuterVolumeSpecName: "kube-api-access-bfh6k") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "kube-api-access-bfh6k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.696439 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e46657-55ca-43e7-9a43-6bb875c7debf-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.705980 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "91e46657-55ca-43e7-9a43-6bb875c7debf" (UID: "91e46657-55ca-43e7-9a43-6bb875c7debf"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.786711 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-var-lib-openvswitch\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787100 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-ovnkube-config\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787127 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-slash\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787146 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-kubelet\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787200 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-env-overrides\") pod \"ovnkube-node-slcxb\" (UID: 
\"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787228 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-run-systemd\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787250 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-ovn-node-metrics-cert\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787279 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-systemd-units\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787303 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-run-ovn\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787326 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-ovnkube-script-lib\") pod 
\"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787436 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-etc-openvswitch\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787499 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-run-openvswitch\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787527 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-run-ovn-kubernetes\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787553 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-node-log\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787597 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-run-netns\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787629 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-cni-bin\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787660 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-log-socket\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787684 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgsrj\" (UniqueName: \"kubernetes.io/projected/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-kube-api-access-tgsrj\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787756 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787788 5118 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-cni-netd\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787878 5118 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787892 5118 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91e46657-55ca-43e7-9a43-6bb875c7debf-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787906 5118 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787919 5118 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787930 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787941 5118 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787952 5118 
reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787963 5118 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787974 5118 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.787988 5118 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.788000 5118 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.788013 5118 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.788023 5118 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.788034 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bfh6k\" 
(UniqueName: \"kubernetes.io/projected/91e46657-55ca-43e7-9a43-6bb875c7debf-kube-api-access-bfh6k\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.788057 5118 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.788068 5118 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.788082 5118 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.788093 5118 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.788104 5118 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91e46657-55ca-43e7-9a43-6bb875c7debf-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.788115 5118 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91e46657-55ca-43e7-9a43-6bb875c7debf-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889085 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889152 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-cni-netd\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889216 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-var-lib-openvswitch\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889257 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-ovnkube-config\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889255 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889293 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-var-lib-openvswitch\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889340 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-cni-netd\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889393 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-slash\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889447 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-kubelet\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889478 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-env-overrides\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889503 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-run-systemd\") pod 
\"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889506 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-slash\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889526 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-ovn-node-metrics-cert\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889556 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-systemd-units\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889508 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-kubelet\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889611 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-systemd-units\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889578 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-run-systemd\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889596 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-run-ovn\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889574 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-run-ovn\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889719 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-ovnkube-script-lib\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889764 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-etc-openvswitch\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889804 5118 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-run-openvswitch\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889828 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-run-ovn-kubernetes\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889845 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-node-log\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889886 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-run-netns\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889911 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-cni-bin\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889935 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-log-socket\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.889948 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgsrj\" (UniqueName: \"kubernetes.io/projected/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-kube-api-access-tgsrj\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.890032 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-env-overrides\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.890071 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-run-ovn-kubernetes\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.890224 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-ovnkube-config\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.890287 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-run-netns\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.890320 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-node-log\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.890360 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-host-cni-bin\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.890385 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-log-socket\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.890392 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-etc-openvswitch\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.890437 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-run-openvswitch\") pod \"ovnkube-node-slcxb\" (UID: 
\"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.890562 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-ovnkube-script-lib\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.895640 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-ovn-node-metrics-cert\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.910812 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgsrj\" (UniqueName: \"kubernetes.io/projected/6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0-kube-api-access-tgsrj\") pod \"ovnkube-node-slcxb\" (UID: \"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0\") " pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.973204 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:46 crc kubenswrapper[5118]: I0121 00:19:46.981946 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddc3c284-5d85-4e40-b285-f16062ad8d9c" path="/var/lib/kubelet/pods/ddc3c284-5d85-4e40-b285-f16062ad8d9c/volumes" Jan 21 00:19:46 crc kubenswrapper[5118]: W0121 00:19:46.990863 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b0c24bf_21c3_43a2_b859_8bd6b31e6ad0.slice/crio-9b8ea4a9585cf66016dc636a1c0e9f6ebbf9abe203f5e6aaa9941386ce49e7a8 WatchSource:0}: Error finding container 9b8ea4a9585cf66016dc636a1c0e9f6ebbf9abe203f5e6aaa9941386ce49e7a8: Status 404 returned error can't find the container with id 9b8ea4a9585cf66016dc636a1c0e9f6ebbf9abe203f5e6aaa9941386ce49e7a8 Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.345514 5118 generic.go:358] "Generic (PLEG): container finished" podID="6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0" containerID="ae52b5d58398266b1fb05f4e2686980e46068e090b0fb17d26201639d43a745a" exitCode=0 Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.345596 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" event={"ID":"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0","Type":"ContainerDied","Data":"ae52b5d58398266b1fb05f4e2686980e46068e090b0fb17d26201639d43a745a"} Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.345641 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" event={"ID":"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0","Type":"ContainerStarted","Data":"9b8ea4a9585cf66016dc636a1c0e9f6ebbf9abe203f5e6aaa9941386ce49e7a8"} Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.358520 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h8fs2_91e46657-55ca-43e7-9a43-6bb875c7debf/ovn-acl-logging/0.log" Jan 21 
00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.359033 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h8fs2_91e46657-55ca-43e7-9a43-6bb875c7debf/ovn-controller/0.log" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.359593 5118 generic.go:358] "Generic (PLEG): container finished" podID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerID="cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a" exitCode=0 Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.359625 5118 generic.go:358] "Generic (PLEG): container finished" podID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerID="47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9" exitCode=0 Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.359637 5118 generic.go:358] "Generic (PLEG): container finished" podID="91e46657-55ca-43e7-9a43-6bb875c7debf" containerID="1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1" exitCode=0 Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.359633 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerDied","Data":"cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a"} Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.359702 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerDied","Data":"47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9"} Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.359718 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerDied","Data":"1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1"} Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 
00:19:47.359730 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" event={"ID":"91e46657-55ca-43e7-9a43-6bb875c7debf","Type":"ContainerDied","Data":"c0ec43a4f1b8caf57b219eb8283d87eadc74827c740b8e6a175c044f08150495"} Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.359728 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h8fs2" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.359748 5118 scope.go:117] "RemoveContainer" containerID="cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.362421 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9" event={"ID":"4911732e-00d3-4732-bee3-18f866800bde","Type":"ContainerStarted","Data":"d108e4a7a9e8120c3d39f2c8fd2c7a3a7d538a249f9d97124525bb25c5a98bce"} Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.362459 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9" event={"ID":"4911732e-00d3-4732-bee3-18f866800bde","Type":"ContainerStarted","Data":"5cc6be48a5f6c40456bf2131054ec13c824d798cbb6cf82cdc9e10265d13e703"} Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.362470 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9" event={"ID":"4911732e-00d3-4732-bee3-18f866800bde","Type":"ContainerStarted","Data":"6d4db7a8d60c64969d6f2c33c3dbc70839bea579f35199b96358a389c9b05b87"} Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.366675 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.366759 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-qcqwq" event={"ID":"7c0390f5-26b4-4299-958c-acac058be619","Type":"ContainerStarted","Data":"5d04f5dbed310ebf98940a711fc93c571761f6a090f24cd4c8a9cb79ff7fd159"} Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.379951 5118 scope.go:117] "RemoveContainer" containerID="47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.397063 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h8fs2"] Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.402861 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h8fs2"] Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.404474 5118 scope.go:117] "RemoveContainer" containerID="e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.423255 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-grht9" podStartSLOduration=2.423234758 podStartE2EDuration="2.423234758s" podCreationTimestamp="2026-01-21 00:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:19:47.422141779 +0000 UTC m=+642.746388797" watchObservedRunningTime="2026-01-21 00:19:47.423234758 +0000 UTC m=+642.747481796" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.437664 5118 scope.go:117] "RemoveContainer" containerID="1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.456806 5118 scope.go:117] "RemoveContainer" containerID="6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.468913 5118 scope.go:117] "RemoveContainer" 
containerID="44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.480299 5118 scope.go:117] "RemoveContainer" containerID="5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.492726 5118 scope.go:117] "RemoveContainer" containerID="3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.507899 5118 scope.go:117] "RemoveContainer" containerID="9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.519353 5118 scope.go:117] "RemoveContainer" containerID="cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a" Jan 21 00:19:47 crc kubenswrapper[5118]: E0121 00:19:47.519782 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a\": container with ID starting with cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a not found: ID does not exist" containerID="cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.519838 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a"} err="failed to get container status \"cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a\": rpc error: code = NotFound desc = could not find container \"cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a\": container with ID starting with cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.519878 5118 scope.go:117] "RemoveContainer" 
containerID="47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9" Jan 21 00:19:47 crc kubenswrapper[5118]: E0121 00:19:47.520297 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9\": container with ID starting with 47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9 not found: ID does not exist" containerID="47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.520335 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9"} err="failed to get container status \"47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9\": rpc error: code = NotFound desc = could not find container \"47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9\": container with ID starting with 47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9 not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.520363 5118 scope.go:117] "RemoveContainer" containerID="e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e" Jan 21 00:19:47 crc kubenswrapper[5118]: E0121 00:19:47.520614 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e\": container with ID starting with e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e not found: ID does not exist" containerID="e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.520635 5118 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e"} err="failed to get container status \"e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e\": rpc error: code = NotFound desc = could not find container \"e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e\": container with ID starting with e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.520650 5118 scope.go:117] "RemoveContainer" containerID="1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1" Jan 21 00:19:47 crc kubenswrapper[5118]: E0121 00:19:47.520998 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1\": container with ID starting with 1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1 not found: ID does not exist" containerID="1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.521028 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1"} err="failed to get container status \"1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1\": rpc error: code = NotFound desc = could not find container \"1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1\": container with ID starting with 1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1 not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.521051 5118 scope.go:117] "RemoveContainer" containerID="6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f" Jan 21 00:19:47 crc kubenswrapper[5118]: E0121 00:19:47.521347 5118 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f\": container with ID starting with 6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f not found: ID does not exist" containerID="6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.521375 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f"} err="failed to get container status \"6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f\": rpc error: code = NotFound desc = could not find container \"6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f\": container with ID starting with 6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.521392 5118 scope.go:117] "RemoveContainer" containerID="44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3" Jan 21 00:19:47 crc kubenswrapper[5118]: E0121 00:19:47.521647 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3\": container with ID starting with 44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3 not found: ID does not exist" containerID="44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.521675 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3"} err="failed to get container status \"44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3\": rpc error: code = NotFound desc = could not find container 
\"44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3\": container with ID starting with 44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3 not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.521693 5118 scope.go:117] "RemoveContainer" containerID="5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961" Jan 21 00:19:47 crc kubenswrapper[5118]: E0121 00:19:47.521909 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961\": container with ID starting with 5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961 not found: ID does not exist" containerID="5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.521936 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961"} err="failed to get container status \"5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961\": rpc error: code = NotFound desc = could not find container \"5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961\": container with ID starting with 5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961 not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.521955 5118 scope.go:117] "RemoveContainer" containerID="3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407" Jan 21 00:19:47 crc kubenswrapper[5118]: E0121 00:19:47.522141 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407\": container with ID starting with 3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407 not found: ID does not exist" 
containerID="3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.522197 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407"} err="failed to get container status \"3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407\": rpc error: code = NotFound desc = could not find container \"3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407\": container with ID starting with 3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407 not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.522214 5118 scope.go:117] "RemoveContainer" containerID="9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e" Jan 21 00:19:47 crc kubenswrapper[5118]: E0121 00:19:47.522500 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e\": container with ID starting with 9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e not found: ID does not exist" containerID="9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.522528 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e"} err="failed to get container status \"9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e\": rpc error: code = NotFound desc = could not find container \"9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e\": container with ID starting with 9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.522546 5118 scope.go:117] 
"RemoveContainer" containerID="cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.522771 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a"} err="failed to get container status \"cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a\": rpc error: code = NotFound desc = could not find container \"cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a\": container with ID starting with cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.522799 5118 scope.go:117] "RemoveContainer" containerID="47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.523039 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9"} err="failed to get container status \"47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9\": rpc error: code = NotFound desc = could not find container \"47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9\": container with ID starting with 47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9 not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.523058 5118 scope.go:117] "RemoveContainer" containerID="e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.523280 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e"} err="failed to get container status \"e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e\": rpc error: code = 
NotFound desc = could not find container \"e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e\": container with ID starting with e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.523303 5118 scope.go:117] "RemoveContainer" containerID="1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.523573 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1"} err="failed to get container status \"1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1\": rpc error: code = NotFound desc = could not find container \"1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1\": container with ID starting with 1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1 not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.523592 5118 scope.go:117] "RemoveContainer" containerID="6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.523843 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f"} err="failed to get container status \"6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f\": rpc error: code = NotFound desc = could not find container \"6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f\": container with ID starting with 6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.523867 5118 scope.go:117] "RemoveContainer" containerID="44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3" Jan 21 00:19:47 crc 
kubenswrapper[5118]: I0121 00:19:47.524303 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3"} err="failed to get container status \"44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3\": rpc error: code = NotFound desc = could not find container \"44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3\": container with ID starting with 44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3 not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.524328 5118 scope.go:117] "RemoveContainer" containerID="5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.524547 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961"} err="failed to get container status \"5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961\": rpc error: code = NotFound desc = could not find container \"5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961\": container with ID starting with 5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961 not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.524568 5118 scope.go:117] "RemoveContainer" containerID="3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.524780 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407"} err="failed to get container status \"3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407\": rpc error: code = NotFound desc = could not find container \"3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407\": container 
with ID starting with 3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407 not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.524801 5118 scope.go:117] "RemoveContainer" containerID="9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.525698 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e"} err="failed to get container status \"9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e\": rpc error: code = NotFound desc = could not find container \"9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e\": container with ID starting with 9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.525723 5118 scope.go:117] "RemoveContainer" containerID="cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.525950 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a"} err="failed to get container status \"cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a\": rpc error: code = NotFound desc = could not find container \"cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a\": container with ID starting with cd6b0e74901b77629ea9cf2a9f6ed14b9dadab5da55771b2c4329c542b5dde1a not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.526010 5118 scope.go:117] "RemoveContainer" containerID="47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.527647 5118 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9"} err="failed to get container status \"47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9\": rpc error: code = NotFound desc = could not find container \"47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9\": container with ID starting with 47304a559d2c42615011fabfa78530d881d2f5b7e75c9f0d869341e77027c9d9 not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.527715 5118 scope.go:117] "RemoveContainer" containerID="e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.532320 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e"} err="failed to get container status \"e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e\": rpc error: code = NotFound desc = could not find container \"e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e\": container with ID starting with e0443ae711ad7be79ef581edaa3a7e5dc22631f021e1f4072d61e2705eaa2f1e not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.532369 5118 scope.go:117] "RemoveContainer" containerID="1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.532996 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1"} err="failed to get container status \"1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1\": rpc error: code = NotFound desc = could not find container \"1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1\": container with ID starting with 1d172206bbe6a689c32082da3fb77a7d564d2a522a9cc4b7d70193aebfc326a1 not found: ID does not 
exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.533025 5118 scope.go:117] "RemoveContainer" containerID="6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.533582 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f"} err="failed to get container status \"6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f\": rpc error: code = NotFound desc = could not find container \"6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f\": container with ID starting with 6cec5a44434a07153f1d3e9910d8fd3d9a2ec37767604df403adafe3ad6eaf2f not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.533905 5118 scope.go:117] "RemoveContainer" containerID="44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.534292 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3"} err="failed to get container status \"44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3\": rpc error: code = NotFound desc = could not find container \"44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3\": container with ID starting with 44654afa2c6f84f6cc69686c1499e120914b6742b725292990c497aa516ae5e3 not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.534315 5118 scope.go:117] "RemoveContainer" containerID="5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.534636 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961"} err="failed to get container status 
\"5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961\": rpc error: code = NotFound desc = could not find container \"5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961\": container with ID starting with 5bb9b3c5f6f84029c1079215b2b2e783858fbe9d0ffff553ce9c8357e054c961 not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.534656 5118 scope.go:117] "RemoveContainer" containerID="3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.534965 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407"} err="failed to get container status \"3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407\": rpc error: code = NotFound desc = could not find container \"3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407\": container with ID starting with 3752a9e4c7c738fecf3e21570f930846192609fc9b64b76e9895d30662f15407 not found: ID does not exist" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.534982 5118 scope.go:117] "RemoveContainer" containerID="9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e" Jan 21 00:19:47 crc kubenswrapper[5118]: I0121 00:19:47.535217 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e"} err="failed to get container status \"9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e\": rpc error: code = NotFound desc = could not find container \"9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e\": container with ID starting with 9fd0d8aa06981c2157fb923838a8a8cb7c4ad93337f136546214029480b3dc3e not found: ID does not exist" Jan 21 00:19:48 crc kubenswrapper[5118]: I0121 00:19:48.377017 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" event={"ID":"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0","Type":"ContainerStarted","Data":"ef5026220dc5b3db0a69bfc9c26682a588c42296f9fda94bc3639df9b0c77ddf"} Jan 21 00:19:48 crc kubenswrapper[5118]: I0121 00:19:48.377064 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" event={"ID":"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0","Type":"ContainerStarted","Data":"d15ef6ac71b78df5557ab03682458eee4282e5088ca449fd0d17dc7a7c55e83f"} Jan 21 00:19:48 crc kubenswrapper[5118]: I0121 00:19:48.377083 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" event={"ID":"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0","Type":"ContainerStarted","Data":"95e300967837b3cca73877cf0096bfad10d95125254b118260b4e7d04c2ea457"} Jan 21 00:19:48 crc kubenswrapper[5118]: I0121 00:19:48.377098 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" event={"ID":"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0","Type":"ContainerStarted","Data":"f023e4777115d05960a17a1f4740ebe6684ba15f63879edd3992af6e32acf329"} Jan 21 00:19:48 crc kubenswrapper[5118]: I0121 00:19:48.377113 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" event={"ID":"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0","Type":"ContainerStarted","Data":"d7e9f185e722b677977fdf35a3f0d5b091e5e53bfad4dcb8fe8716485df44b4f"} Jan 21 00:19:48 crc kubenswrapper[5118]: I0121 00:19:48.377130 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" event={"ID":"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0","Type":"ContainerStarted","Data":"690882b80f235be33c31c79565b6a79faffac1db9cdd4a0576c9f7c3887a8848"} Jan 21 00:19:48 crc kubenswrapper[5118]: I0121 00:19:48.983042 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91e46657-55ca-43e7-9a43-6bb875c7debf" 
path="/var/lib/kubelet/pods/91e46657-55ca-43e7-9a43-6bb875c7debf/volumes" Jan 21 00:19:51 crc kubenswrapper[5118]: I0121 00:19:51.400359 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" event={"ID":"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0","Type":"ContainerStarted","Data":"507c6db02aff840ce687a09bcbe9b5227ff12ff26586cb7534edee4b881c03e3"} Jan 21 00:19:52 crc kubenswrapper[5118]: I0121 00:19:52.411614 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" event={"ID":"6b0c24bf-21c3-43a2-b859-8bd6b31e6ad0","Type":"ContainerStarted","Data":"a64c47cca78447adb9899a337b5fd7b254e9d1eaaff9b62f7da7ef68dcc65736"} Jan 21 00:19:52 crc kubenswrapper[5118]: I0121 00:19:52.412219 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:52 crc kubenswrapper[5118]: I0121 00:19:52.412274 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:52 crc kubenswrapper[5118]: I0121 00:19:52.413370 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:52 crc kubenswrapper[5118]: I0121 00:19:52.440892 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:52 crc kubenswrapper[5118]: I0121 00:19:52.443123 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" Jan 21 00:19:52 crc kubenswrapper[5118]: I0121 00:19:52.450288 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb" podStartSLOduration=6.450265435 podStartE2EDuration="6.450265435s" podCreationTimestamp="2026-01-21 00:19:46 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:19:52.446903085 +0000 UTC m=+647.771150123" watchObservedRunningTime="2026-01-21 00:19:52.450265435 +0000 UTC m=+647.774512453" Jan 21 00:20:00 crc kubenswrapper[5118]: I0121 00:20:00.140353 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482580-tdx44"] Jan 21 00:20:00 crc kubenswrapper[5118]: I0121 00:20:00.681649 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482580-tdx44"] Jan 21 00:20:00 crc kubenswrapper[5118]: I0121 00:20:00.681773 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482580-tdx44" Jan 21 00:20:00 crc kubenswrapper[5118]: I0121 00:20:00.684898 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 00:20:00 crc kubenswrapper[5118]: I0121 00:20:00.685604 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 00:20:00 crc kubenswrapper[5118]: I0121 00:20:00.686994 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 00:20:00 crc kubenswrapper[5118]: I0121 00:20:00.756245 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f68gv\" (UniqueName: \"kubernetes.io/projected/b6bed478-86fa-4a4e-a75e-02a576884ad1-kube-api-access-f68gv\") pod \"auto-csr-approver-29482580-tdx44\" (UID: \"b6bed478-86fa-4a4e-a75e-02a576884ad1\") " pod="openshift-infra/auto-csr-approver-29482580-tdx44" Jan 21 00:20:00 crc kubenswrapper[5118]: I0121 00:20:00.857477 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f68gv\" (UniqueName: 
\"kubernetes.io/projected/b6bed478-86fa-4a4e-a75e-02a576884ad1-kube-api-access-f68gv\") pod \"auto-csr-approver-29482580-tdx44\" (UID: \"b6bed478-86fa-4a4e-a75e-02a576884ad1\") " pod="openshift-infra/auto-csr-approver-29482580-tdx44" Jan 21 00:20:00 crc kubenswrapper[5118]: I0121 00:20:00.881624 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f68gv\" (UniqueName: \"kubernetes.io/projected/b6bed478-86fa-4a4e-a75e-02a576884ad1-kube-api-access-f68gv\") pod \"auto-csr-approver-29482580-tdx44\" (UID: \"b6bed478-86fa-4a4e-a75e-02a576884ad1\") " pod="openshift-infra/auto-csr-approver-29482580-tdx44" Jan 21 00:20:01 crc kubenswrapper[5118]: I0121 00:20:01.006427 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482580-tdx44" Jan 21 00:20:01 crc kubenswrapper[5118]: I0121 00:20:01.212964 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482580-tdx44"] Jan 21 00:20:01 crc kubenswrapper[5118]: W0121 00:20:01.216895 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6bed478_86fa_4a4e_a75e_02a576884ad1.slice/crio-6f094192125bd92e7227fce96cf3b4bf87611136da8fc107096d4471ca21a59d WatchSource:0}: Error finding container 6f094192125bd92e7227fce96cf3b4bf87611136da8fc107096d4471ca21a59d: Status 404 returned error can't find the container with id 6f094192125bd92e7227fce96cf3b4bf87611136da8fc107096d4471ca21a59d Jan 21 00:20:01 crc kubenswrapper[5118]: I0121 00:20:01.462492 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482580-tdx44" event={"ID":"b6bed478-86fa-4a4e-a75e-02a576884ad1","Type":"ContainerStarted","Data":"6f094192125bd92e7227fce96cf3b4bf87611136da8fc107096d4471ca21a59d"} Jan 21 00:20:03 crc kubenswrapper[5118]: I0121 00:20:03.520316 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29482580-tdx44" event={"ID":"b6bed478-86fa-4a4e-a75e-02a576884ad1","Type":"ContainerStarted","Data":"37310520918d79172457c9b32c6b915c3a0f193abcc41c48a4cac8d81c85d580"}
Jan 21 00:20:03 crc kubenswrapper[5118]: I0121 00:20:03.532751 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29482580-tdx44" podStartSLOduration=1.6844725390000002 podStartE2EDuration="3.532727589s" podCreationTimestamp="2026-01-21 00:20:00 +0000 UTC" firstStartedPulling="2026-01-21 00:20:01.218720846 +0000 UTC m=+656.542967864" lastFinishedPulling="2026-01-21 00:20:03.066975896 +0000 UTC m=+658.391222914" observedRunningTime="2026-01-21 00:20:03.529663188 +0000 UTC m=+658.853910226" watchObservedRunningTime="2026-01-21 00:20:03.532727589 +0000 UTC m=+658.856974627"
Jan 21 00:20:03 crc kubenswrapper[5118]: I0121 00:20:03.800908 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:20:03 crc kubenswrapper[5118]: I0121 00:20:03.801033 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:20:04 crc kubenswrapper[5118]: I0121 00:20:04.527509 5118 generic.go:358] "Generic (PLEG): container finished" podID="b6bed478-86fa-4a4e-a75e-02a576884ad1" containerID="37310520918d79172457c9b32c6b915c3a0f193abcc41c48a4cac8d81c85d580" exitCode=0
Jan 21 00:20:04 crc kubenswrapper[5118]: I0121 00:20:04.527582 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482580-tdx44" event={"ID":"b6bed478-86fa-4a4e-a75e-02a576884ad1","Type":"ContainerDied","Data":"37310520918d79172457c9b32c6b915c3a0f193abcc41c48a4cac8d81c85d580"}
Jan 21 00:20:05 crc kubenswrapper[5118]: I0121 00:20:05.730176 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482580-tdx44"
Jan 21 00:20:05 crc kubenswrapper[5118]: I0121 00:20:05.846826 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f68gv\" (UniqueName: \"kubernetes.io/projected/b6bed478-86fa-4a4e-a75e-02a576884ad1-kube-api-access-f68gv\") pod \"b6bed478-86fa-4a4e-a75e-02a576884ad1\" (UID: \"b6bed478-86fa-4a4e-a75e-02a576884ad1\") "
Jan 21 00:20:05 crc kubenswrapper[5118]: I0121 00:20:05.853702 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6bed478-86fa-4a4e-a75e-02a576884ad1-kube-api-access-f68gv" (OuterVolumeSpecName: "kube-api-access-f68gv") pod "b6bed478-86fa-4a4e-a75e-02a576884ad1" (UID: "b6bed478-86fa-4a4e-a75e-02a576884ad1"). InnerVolumeSpecName "kube-api-access-f68gv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:20:05 crc kubenswrapper[5118]: I0121 00:20:05.948194 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f68gv\" (UniqueName: \"kubernetes.io/projected/b6bed478-86fa-4a4e-a75e-02a576884ad1-kube-api-access-f68gv\") on node \"crc\" DevicePath \"\""
Jan 21 00:20:06 crc kubenswrapper[5118]: I0121 00:20:06.546023 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482580-tdx44"
Jan 21 00:20:06 crc kubenswrapper[5118]: I0121 00:20:06.546042 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482580-tdx44" event={"ID":"b6bed478-86fa-4a4e-a75e-02a576884ad1","Type":"ContainerDied","Data":"6f094192125bd92e7227fce96cf3b4bf87611136da8fc107096d4471ca21a59d"}
Jan 21 00:20:06 crc kubenswrapper[5118]: I0121 00:20:06.546085 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f094192125bd92e7227fce96cf3b4bf87611136da8fc107096d4471ca21a59d"
Jan 21 00:20:24 crc kubenswrapper[5118]: I0121 00:20:24.447338 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-slcxb"
Jan 21 00:20:33 crc kubenswrapper[5118]: I0121 00:20:33.801592 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:20:33 crc kubenswrapper[5118]: I0121 00:20:33.802140 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:20:57 crc kubenswrapper[5118]: I0121 00:20:57.844136 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wj6k"]
Jan 21 00:20:57 crc kubenswrapper[5118]: I0121 00:20:57.845083 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6wj6k" podUID="8a36e2d2-0658-478e-8105-459a04d0234b" containerName="registry-server" containerID="cri-o://47af2c0ea7edda57758882a7b2d34ac9b9b2daa86db50b1fe30a2371e5339aac" gracePeriod=30
Jan 21 00:20:57 crc kubenswrapper[5118]: I0121 00:20:57.875766 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-6wj6k" podUID="8a36e2d2-0658-478e-8105-459a04d0234b" containerName="registry-server" probeResult="failure" output=""
Jan 21 00:20:58 crc kubenswrapper[5118]: E0121 00:20:58.439801 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 47af2c0ea7edda57758882a7b2d34ac9b9b2daa86db50b1fe30a2371e5339aac is running failed: container process not found" containerID="47af2c0ea7edda57758882a7b2d34ac9b9b2daa86db50b1fe30a2371e5339aac" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 00:20:58 crc kubenswrapper[5118]: E0121 00:20:58.440183 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 47af2c0ea7edda57758882a7b2d34ac9b9b2daa86db50b1fe30a2371e5339aac is running failed: container process not found" containerID="47af2c0ea7edda57758882a7b2d34ac9b9b2daa86db50b1fe30a2371e5339aac" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 00:20:58 crc kubenswrapper[5118]: E0121 00:20:58.440568 5118 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 47af2c0ea7edda57758882a7b2d34ac9b9b2daa86db50b1fe30a2371e5339aac is running failed: container process not found" containerID="47af2c0ea7edda57758882a7b2d34ac9b9b2daa86db50b1fe30a2371e5339aac" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 00:20:58 crc kubenswrapper[5118]: E0121 00:20:58.440605 5118 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 47af2c0ea7edda57758882a7b2d34ac9b9b2daa86db50b1fe30a2371e5339aac is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-6wj6k" podUID="8a36e2d2-0658-478e-8105-459a04d0234b" containerName="registry-server" probeResult="unknown"
Jan 21 00:20:59 crc kubenswrapper[5118]: I0121 00:20:59.968532 5118 generic.go:358] "Generic (PLEG): container finished" podID="8a36e2d2-0658-478e-8105-459a04d0234b" containerID="47af2c0ea7edda57758882a7b2d34ac9b9b2daa86db50b1fe30a2371e5339aac" exitCode=0
Jan 21 00:20:59 crc kubenswrapper[5118]: I0121 00:20:59.968755 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wj6k" event={"ID":"8a36e2d2-0658-478e-8105-459a04d0234b","Type":"ContainerDied","Data":"47af2c0ea7edda57758882a7b2d34ac9b9b2daa86db50b1fe30a2371e5339aac"}
Jan 21 00:21:00 crc kubenswrapper[5118]: I0121 00:21:00.106653 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wj6k"
Jan 21 00:21:00 crc kubenswrapper[5118]: I0121 00:21:00.152951 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a36e2d2-0658-478e-8105-459a04d0234b-catalog-content\") pod \"8a36e2d2-0658-478e-8105-459a04d0234b\" (UID: \"8a36e2d2-0658-478e-8105-459a04d0234b\") "
Jan 21 00:21:00 crc kubenswrapper[5118]: I0121 00:21:00.153049 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cczlt\" (UniqueName: \"kubernetes.io/projected/8a36e2d2-0658-478e-8105-459a04d0234b-kube-api-access-cczlt\") pod \"8a36e2d2-0658-478e-8105-459a04d0234b\" (UID: \"8a36e2d2-0658-478e-8105-459a04d0234b\") "
Jan 21 00:21:00 crc kubenswrapper[5118]: I0121 00:21:00.153074 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a36e2d2-0658-478e-8105-459a04d0234b-utilities\") pod \"8a36e2d2-0658-478e-8105-459a04d0234b\" (UID: \"8a36e2d2-0658-478e-8105-459a04d0234b\") "
Jan 21 00:21:00 crc kubenswrapper[5118]: I0121 00:21:00.154377 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a36e2d2-0658-478e-8105-459a04d0234b-utilities" (OuterVolumeSpecName: "utilities") pod "8a36e2d2-0658-478e-8105-459a04d0234b" (UID: "8a36e2d2-0658-478e-8105-459a04d0234b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:21:00 crc kubenswrapper[5118]: I0121 00:21:00.158946 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a36e2d2-0658-478e-8105-459a04d0234b-kube-api-access-cczlt" (OuterVolumeSpecName: "kube-api-access-cczlt") pod "8a36e2d2-0658-478e-8105-459a04d0234b" (UID: "8a36e2d2-0658-478e-8105-459a04d0234b"). InnerVolumeSpecName "kube-api-access-cczlt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:21:00 crc kubenswrapper[5118]: I0121 00:21:00.165285 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a36e2d2-0658-478e-8105-459a04d0234b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a36e2d2-0658-478e-8105-459a04d0234b" (UID: "8a36e2d2-0658-478e-8105-459a04d0234b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:21:00 crc kubenswrapper[5118]: I0121 00:21:00.254607 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a36e2d2-0658-478e-8105-459a04d0234b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 00:21:00 crc kubenswrapper[5118]: I0121 00:21:00.254650 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cczlt\" (UniqueName: \"kubernetes.io/projected/8a36e2d2-0658-478e-8105-459a04d0234b-kube-api-access-cczlt\") on node \"crc\" DevicePath \"\""
Jan 21 00:21:00 crc kubenswrapper[5118]: I0121 00:21:00.254664 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a36e2d2-0658-478e-8105-459a04d0234b-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 00:21:00 crc kubenswrapper[5118]: I0121 00:21:00.977251 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wj6k"
Jan 21 00:21:00 crc kubenswrapper[5118]: I0121 00:21:00.983192 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wj6k" event={"ID":"8a36e2d2-0658-478e-8105-459a04d0234b","Type":"ContainerDied","Data":"3afc36bca688b1476e0e6c82406fadcc89d9bec322267395c0d8f1faf195bbe0"}
Jan 21 00:21:00 crc kubenswrapper[5118]: I0121 00:21:00.983244 5118 scope.go:117] "RemoveContainer" containerID="47af2c0ea7edda57758882a7b2d34ac9b9b2daa86db50b1fe30a2371e5339aac"
Jan 21 00:21:01 crc kubenswrapper[5118]: I0121 00:21:01.001920 5118 scope.go:117] "RemoveContainer" containerID="bb6afa2612c81fb855c871f0ed3652f6e063afcaa8a951f8836db06626752119"
Jan 21 00:21:01 crc kubenswrapper[5118]: I0121 00:21:01.008553 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wj6k"]
Jan 21 00:21:01 crc kubenswrapper[5118]: I0121 00:21:01.014145 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wj6k"]
Jan 21 00:21:01 crc kubenswrapper[5118]: I0121 00:21:01.033947 5118 scope.go:117] "RemoveContainer" containerID="3c936dd8d7c4f279d12c850389e1c638a6a724c849a5e64bfb09c8ab7dcb2e21"
Jan 21 00:21:01 crc kubenswrapper[5118]: I0121 00:21:01.620535 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"]
Jan 21 00:21:01 crc kubenswrapper[5118]: I0121 00:21:01.621202 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8a36e2d2-0658-478e-8105-459a04d0234b" containerName="registry-server"
Jan 21 00:21:01 crc kubenswrapper[5118]: I0121 00:21:01.621225 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a36e2d2-0658-478e-8105-459a04d0234b" containerName="registry-server"
Jan 21 00:21:01 crc kubenswrapper[5118]: I0121 00:21:01.621250 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8a36e2d2-0658-478e-8105-459a04d0234b" containerName="extract-utilities"
Jan 21 00:21:01 crc kubenswrapper[5118]: I0121 00:21:01.621258 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a36e2d2-0658-478e-8105-459a04d0234b" containerName="extract-utilities"
Jan 21 00:21:01 crc kubenswrapper[5118]: I0121 00:21:01.621267 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8a36e2d2-0658-478e-8105-459a04d0234b" containerName="extract-content"
Jan 21 00:21:01 crc kubenswrapper[5118]: I0121 00:21:01.621277 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a36e2d2-0658-478e-8105-459a04d0234b" containerName="extract-content"
Jan 21 00:21:01 crc kubenswrapper[5118]: I0121 00:21:01.621295 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b6bed478-86fa-4a4e-a75e-02a576884ad1" containerName="oc"
Jan 21 00:21:01 crc kubenswrapper[5118]: I0121 00:21:01.621303 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6bed478-86fa-4a4e-a75e-02a576884ad1" containerName="oc"
Jan 21 00:21:01 crc kubenswrapper[5118]: I0121 00:21:01.621425 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="8a36e2d2-0658-478e-8105-459a04d0234b" containerName="registry-server"
Jan 21 00:21:01 crc kubenswrapper[5118]: I0121 00:21:01.621438 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="b6bed478-86fa-4a4e-a75e-02a576884ad1" containerName="oc"
Jan 21 00:21:02 crc kubenswrapper[5118]: I0121 00:21:02.057869 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"]
Jan 21 00:21:02 crc kubenswrapper[5118]: I0121 00:21:02.058012 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"
Jan 21 00:21:02 crc kubenswrapper[5118]: I0121 00:21:02.062871 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Jan 21 00:21:02 crc kubenswrapper[5118]: I0121 00:21:02.081940 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5435bf24-0656-4edc-aa9a-9475d2cb648d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk\" (UID: \"5435bf24-0656-4edc-aa9a-9475d2cb648d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"
Jan 21 00:21:02 crc kubenswrapper[5118]: I0121 00:21:02.082749 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4fww\" (UniqueName: \"kubernetes.io/projected/5435bf24-0656-4edc-aa9a-9475d2cb648d-kube-api-access-z4fww\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk\" (UID: \"5435bf24-0656-4edc-aa9a-9475d2cb648d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"
Jan 21 00:21:02 crc kubenswrapper[5118]: I0121 00:21:02.082818 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5435bf24-0656-4edc-aa9a-9475d2cb648d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk\" (UID: \"5435bf24-0656-4edc-aa9a-9475d2cb648d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"
Jan 21 00:21:02 crc kubenswrapper[5118]: I0121 00:21:02.184426 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z4fww\" (UniqueName: \"kubernetes.io/projected/5435bf24-0656-4edc-aa9a-9475d2cb648d-kube-api-access-z4fww\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk\" (UID: \"5435bf24-0656-4edc-aa9a-9475d2cb648d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"
Jan 21 00:21:02 crc kubenswrapper[5118]: I0121 00:21:02.184512 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5435bf24-0656-4edc-aa9a-9475d2cb648d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk\" (UID: \"5435bf24-0656-4edc-aa9a-9475d2cb648d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"
Jan 21 00:21:02 crc kubenswrapper[5118]: I0121 00:21:02.184597 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5435bf24-0656-4edc-aa9a-9475d2cb648d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk\" (UID: \"5435bf24-0656-4edc-aa9a-9475d2cb648d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"
Jan 21 00:21:02 crc kubenswrapper[5118]: I0121 00:21:02.185752 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5435bf24-0656-4edc-aa9a-9475d2cb648d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk\" (UID: \"5435bf24-0656-4edc-aa9a-9475d2cb648d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"
Jan 21 00:21:02 crc kubenswrapper[5118]: I0121 00:21:02.185989 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5435bf24-0656-4edc-aa9a-9475d2cb648d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk\" (UID: \"5435bf24-0656-4edc-aa9a-9475d2cb648d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"
Jan 21 00:21:02 crc kubenswrapper[5118]: I0121 00:21:02.203636 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4fww\" (UniqueName: \"kubernetes.io/projected/5435bf24-0656-4edc-aa9a-9475d2cb648d-kube-api-access-z4fww\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk\" (UID: \"5435bf24-0656-4edc-aa9a-9475d2cb648d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"
Jan 21 00:21:02 crc kubenswrapper[5118]: I0121 00:21:02.382571 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"
Jan 21 00:21:02 crc kubenswrapper[5118]: I0121 00:21:02.555038 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"]
Jan 21 00:21:02 crc kubenswrapper[5118]: I0121 00:21:02.983129 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a36e2d2-0658-478e-8105-459a04d0234b" path="/var/lib/kubelet/pods/8a36e2d2-0658-478e-8105-459a04d0234b/volumes"
Jan 21 00:21:03 crc kubenswrapper[5118]: I0121 00:21:03.006240 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk" event={"ID":"5435bf24-0656-4edc-aa9a-9475d2cb648d","Type":"ContainerStarted","Data":"9cc037d4c87ea64abfb6bf631ec36316ec38a50d24e862634bd715cca8bd9b34"}
Jan 21 00:21:03 crc kubenswrapper[5118]: I0121 00:21:03.006289 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk" event={"ID":"5435bf24-0656-4edc-aa9a-9475d2cb648d","Type":"ContainerStarted","Data":"4f38de2baeb611999da63e73386ab93812429ae42403ca6396806347302393f7"}
Jan 21 00:21:03 crc kubenswrapper[5118]: I0121 00:21:03.801005 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:21:03 crc kubenswrapper[5118]: I0121 00:21:03.801083 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:21:03 crc kubenswrapper[5118]: I0121 00:21:03.801131 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-22r9n"
Jan 21 00:21:03 crc kubenswrapper[5118]: I0121 00:21:03.802365 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7922e95afa9e80095c69f7b0a751dd320865224ec2831af4c9a2dcde9659cd54"} pod="openshift-machine-config-operator/machine-config-daemon-22r9n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 00:21:03 crc kubenswrapper[5118]: I0121 00:21:03.802571 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" containerID="cri-o://7922e95afa9e80095c69f7b0a751dd320865224ec2831af4c9a2dcde9659cd54" gracePeriod=600
Jan 21 00:21:04 crc kubenswrapper[5118]: I0121 00:21:04.013665 5118 generic.go:358] "Generic (PLEG): container finished" podID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerID="7922e95afa9e80095c69f7b0a751dd320865224ec2831af4c9a2dcde9659cd54" exitCode=0
Jan 21 00:21:04 crc kubenswrapper[5118]: I0121 00:21:04.014019 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerDied","Data":"7922e95afa9e80095c69f7b0a751dd320865224ec2831af4c9a2dcde9659cd54"}
Jan 21 00:21:04 crc kubenswrapper[5118]: I0121 00:21:04.014049 5118 scope.go:117] "RemoveContainer" containerID="92f94cff427bbfd2ea80a4772b8465005fc945125ca4b7e3c490d52f65cdb761"
Jan 21 00:21:04 crc kubenswrapper[5118]: I0121 00:21:04.015688 5118 generic.go:358] "Generic (PLEG): container finished" podID="5435bf24-0656-4edc-aa9a-9475d2cb648d" containerID="9cc037d4c87ea64abfb6bf631ec36316ec38a50d24e862634bd715cca8bd9b34" exitCode=0
Jan 21 00:21:04 crc kubenswrapper[5118]: I0121 00:21:04.015771 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk" event={"ID":"5435bf24-0656-4edc-aa9a-9475d2cb648d","Type":"ContainerDied","Data":"9cc037d4c87ea64abfb6bf631ec36316ec38a50d24e862634bd715cca8bd9b34"}
Jan 21 00:21:05 crc kubenswrapper[5118]: I0121 00:21:05.022272 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"f02b486c7f526cea45f0bc8e93498ac542cc749b10fbe7b2dc9e854f825b1f31"}
Jan 21 00:21:06 crc kubenswrapper[5118]: I0121 00:21:06.028098 5118 generic.go:358] "Generic (PLEG): container finished" podID="5435bf24-0656-4edc-aa9a-9475d2cb648d" containerID="585fbb105ac504df5642907d1c4384c720a0cafa6b249b85d0dd175f5a781a3f" exitCode=0
Jan 21 00:21:06 crc kubenswrapper[5118]: I0121 00:21:06.028147 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk" event={"ID":"5435bf24-0656-4edc-aa9a-9475d2cb648d","Type":"ContainerDied","Data":"585fbb105ac504df5642907d1c4384c720a0cafa6b249b85d0dd175f5a781a3f"}
Jan 21 00:21:07 crc kubenswrapper[5118]: I0121 00:21:07.035488 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk" event={"ID":"5435bf24-0656-4edc-aa9a-9475d2cb648d","Type":"ContainerDied","Data":"a553189a1199dab319c257b2fe22ba1401fb38028e9626a7c8f14a0c58414419"}
Jan 21 00:21:07 crc kubenswrapper[5118]: I0121 00:21:07.035489 5118 generic.go:358] "Generic (PLEG): container finished" podID="5435bf24-0656-4edc-aa9a-9475d2cb648d" containerID="a553189a1199dab319c257b2fe22ba1401fb38028e9626a7c8f14a0c58414419" exitCode=0
Jan 21 00:21:08 crc kubenswrapper[5118]: I0121 00:21:08.268214 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"
Jan 21 00:21:08 crc kubenswrapper[5118]: I0121 00:21:08.422053 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5435bf24-0656-4edc-aa9a-9475d2cb648d-bundle\") pod \"5435bf24-0656-4edc-aa9a-9475d2cb648d\" (UID: \"5435bf24-0656-4edc-aa9a-9475d2cb648d\") "
Jan 21 00:21:08 crc kubenswrapper[5118]: I0121 00:21:08.422112 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5435bf24-0656-4edc-aa9a-9475d2cb648d-util\") pod \"5435bf24-0656-4edc-aa9a-9475d2cb648d\" (UID: \"5435bf24-0656-4edc-aa9a-9475d2cb648d\") "
Jan 21 00:21:08 crc kubenswrapper[5118]: I0121 00:21:08.422183 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4fww\" (UniqueName: \"kubernetes.io/projected/5435bf24-0656-4edc-aa9a-9475d2cb648d-kube-api-access-z4fww\") pod \"5435bf24-0656-4edc-aa9a-9475d2cb648d\" (UID: \"5435bf24-0656-4edc-aa9a-9475d2cb648d\") "
Jan 21 00:21:08 crc kubenswrapper[5118]: I0121 00:21:08.427801 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5435bf24-0656-4edc-aa9a-9475d2cb648d-kube-api-access-z4fww" (OuterVolumeSpecName: "kube-api-access-z4fww") pod "5435bf24-0656-4edc-aa9a-9475d2cb648d" (UID: "5435bf24-0656-4edc-aa9a-9475d2cb648d"). InnerVolumeSpecName "kube-api-access-z4fww". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:21:08 crc kubenswrapper[5118]: I0121 00:21:08.433382 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5435bf24-0656-4edc-aa9a-9475d2cb648d-util" (OuterVolumeSpecName: "util") pod "5435bf24-0656-4edc-aa9a-9475d2cb648d" (UID: "5435bf24-0656-4edc-aa9a-9475d2cb648d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:21:08 crc kubenswrapper[5118]: I0121 00:21:08.438618 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5435bf24-0656-4edc-aa9a-9475d2cb648d-bundle" (OuterVolumeSpecName: "bundle") pod "5435bf24-0656-4edc-aa9a-9475d2cb648d" (UID: "5435bf24-0656-4edc-aa9a-9475d2cb648d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:21:08 crc kubenswrapper[5118]: I0121 00:21:08.523827 5118 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5435bf24-0656-4edc-aa9a-9475d2cb648d-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 00:21:08 crc kubenswrapper[5118]: I0121 00:21:08.523869 5118 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5435bf24-0656-4edc-aa9a-9475d2cb648d-util\") on node \"crc\" DevicePath \"\""
Jan 21 00:21:08 crc kubenswrapper[5118]: I0121 00:21:08.523882 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z4fww\" (UniqueName: \"kubernetes.io/projected/5435bf24-0656-4edc-aa9a-9475d2cb648d-kube-api-access-z4fww\") on node \"crc\" DevicePath \"\""
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.051692 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk" event={"ID":"5435bf24-0656-4edc-aa9a-9475d2cb648d","Type":"ContainerDied","Data":"4f38de2baeb611999da63e73386ab93812429ae42403ca6396806347302393f7"}
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.051734 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f38de2baeb611999da63e73386ab93812429ae42403ca6396806347302393f7"
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.052115 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk"
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.815814 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt"]
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.816462 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5435bf24-0656-4edc-aa9a-9475d2cb648d" containerName="util"
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.816478 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="5435bf24-0656-4edc-aa9a-9475d2cb648d" containerName="util"
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.816495 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5435bf24-0656-4edc-aa9a-9475d2cb648d" containerName="pull"
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.816502 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="5435bf24-0656-4edc-aa9a-9475d2cb648d" containerName="pull"
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.816513 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5435bf24-0656-4edc-aa9a-9475d2cb648d" containerName="extract"
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.816519 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="5435bf24-0656-4edc-aa9a-9475d2cb648d" containerName="extract"
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.816614 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="5435bf24-0656-4edc-aa9a-9475d2cb648d" containerName="extract"
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.827410 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt"]
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.827548 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt"
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.829945 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.939466 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d7805d7-a48a-488e-9f51-715cd1e444bf-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt\" (UID: \"1d7805d7-a48a-488e-9f51-715cd1e444bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt"
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.939549 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d7805d7-a48a-488e-9f51-715cd1e444bf-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt\" (UID: \"1d7805d7-a48a-488e-9f51-715cd1e444bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt"
Jan 21 00:21:09 crc kubenswrapper[5118]: I0121 00:21:09.939709 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d66m4\" (UniqueName: \"kubernetes.io/projected/1d7805d7-a48a-488e-9f51-715cd1e444bf-kube-api-access-d66m4\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt\" (UID: \"1d7805d7-a48a-488e-9f51-715cd1e444bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt"
Jan 21 00:21:10 crc kubenswrapper[5118]: I0121 00:21:10.040764 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d7805d7-a48a-488e-9f51-715cd1e444bf-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt\" (UID: \"1d7805d7-a48a-488e-9f51-715cd1e444bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt"
Jan 21 00:21:10 crc kubenswrapper[5118]: I0121 00:21:10.040864 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d66m4\" (UniqueName: \"kubernetes.io/projected/1d7805d7-a48a-488e-9f51-715cd1e444bf-kube-api-access-d66m4\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt\" (UID: \"1d7805d7-a48a-488e-9f51-715cd1e444bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt"
Jan 21 00:21:10 crc kubenswrapper[5118]: I0121 00:21:10.040909 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d7805d7-a48a-488e-9f51-715cd1e444bf-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt\" (UID: \"1d7805d7-a48a-488e-9f51-715cd1e444bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt"
Jan 21 00:21:10 crc kubenswrapper[5118]: I0121 00:21:10.041297 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d7805d7-a48a-488e-9f51-715cd1e444bf-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt\" (UID: \"1d7805d7-a48a-488e-9f51-715cd1e444bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt"
Jan 21 00:21:10 crc kubenswrapper[5118]: I0121 00:21:10.041339 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d7805d7-a48a-488e-9f51-715cd1e444bf-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt\" (UID: \"1d7805d7-a48a-488e-9f51-715cd1e444bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt"
Jan 21 00:21:10 crc kubenswrapper[5118]: I0121 00:21:10.063442 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d66m4\" (UniqueName: \"kubernetes.io/projected/1d7805d7-a48a-488e-9f51-715cd1e444bf-kube-api-access-d66m4\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt\" (UID: \"1d7805d7-a48a-488e-9f51-715cd1e444bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt"
Jan 21 00:21:10 crc kubenswrapper[5118]: I0121 00:21:10.149457 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt"
Jan 21 00:21:10 crc kubenswrapper[5118]: I0121 00:21:10.546962 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt"]
Jan 21 00:21:10 crc kubenswrapper[5118]: W0121 00:21:10.551502 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d7805d7_a48a_488e_9f51_715cd1e444bf.slice/crio-b2782c39ee6a70be073943a3b4db87abf984a952d30d4669c96c47083eef69a6 WatchSource:0}: Error finding container b2782c39ee6a70be073943a3b4db87abf984a952d30d4669c96c47083eef69a6: Status 404 returned error can't find the container with id b2782c39ee6a70be073943a3b4db87abf984a952d30d4669c96c47083eef69a6
Jan 21 00:21:10 crc kubenswrapper[5118]: I0121 00:21:10.832725 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7"]
Jan 21 00:21:10 crc kubenswrapper[5118]: I0121 00:21:10.838415 5118 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" Jan 21 00:21:10 crc kubenswrapper[5118]: I0121 00:21:10.858390 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7"] Jan 21 00:21:10 crc kubenswrapper[5118]: I0121 00:21:10.953363 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxrlf\" (UniqueName: \"kubernetes.io/projected/403f2683-0efe-4220-b481-fd8ec6a89da0-kube-api-access-kxrlf\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7\" (UID: \"403f2683-0efe-4220-b481-fd8ec6a89da0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" Jan 21 00:21:10 crc kubenswrapper[5118]: I0121 00:21:10.953427 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/403f2683-0efe-4220-b481-fd8ec6a89da0-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7\" (UID: \"403f2683-0efe-4220-b481-fd8ec6a89da0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" Jan 21 00:21:10 crc kubenswrapper[5118]: I0121 00:21:10.953578 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/403f2683-0efe-4220-b481-fd8ec6a89da0-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7\" (UID: \"403f2683-0efe-4220-b481-fd8ec6a89da0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" Jan 21 00:21:11 crc kubenswrapper[5118]: I0121 00:21:11.054854 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/403f2683-0efe-4220-b481-fd8ec6a89da0-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7\" (UID: \"403f2683-0efe-4220-b481-fd8ec6a89da0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" Jan 21 00:21:11 crc kubenswrapper[5118]: I0121 00:21:11.055058 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/403f2683-0efe-4220-b481-fd8ec6a89da0-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7\" (UID: \"403f2683-0efe-4220-b481-fd8ec6a89da0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" Jan 21 00:21:11 crc kubenswrapper[5118]: I0121 00:21:11.055202 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kxrlf\" (UniqueName: \"kubernetes.io/projected/403f2683-0efe-4220-b481-fd8ec6a89da0-kube-api-access-kxrlf\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7\" (UID: \"403f2683-0efe-4220-b481-fd8ec6a89da0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" Jan 21 00:21:11 crc kubenswrapper[5118]: I0121 00:21:11.055625 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/403f2683-0efe-4220-b481-fd8ec6a89da0-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7\" (UID: \"403f2683-0efe-4220-b481-fd8ec6a89da0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" Jan 21 00:21:11 crc kubenswrapper[5118]: I0121 00:21:11.055776 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/403f2683-0efe-4220-b481-fd8ec6a89da0-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7\" (UID: 
\"403f2683-0efe-4220-b481-fd8ec6a89da0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" Jan 21 00:21:11 crc kubenswrapper[5118]: I0121 00:21:11.064845 5118 generic.go:358] "Generic (PLEG): container finished" podID="1d7805d7-a48a-488e-9f51-715cd1e444bf" containerID="1840386e8b841e36c15294935e49b7c7cf4a01db5d72645d4329cdbd0781fe1d" exitCode=0 Jan 21 00:21:11 crc kubenswrapper[5118]: I0121 00:21:11.064904 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt" event={"ID":"1d7805d7-a48a-488e-9f51-715cd1e444bf","Type":"ContainerDied","Data":"1840386e8b841e36c15294935e49b7c7cf4a01db5d72645d4329cdbd0781fe1d"} Jan 21 00:21:11 crc kubenswrapper[5118]: I0121 00:21:11.064960 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt" event={"ID":"1d7805d7-a48a-488e-9f51-715cd1e444bf","Type":"ContainerStarted","Data":"b2782c39ee6a70be073943a3b4db87abf984a952d30d4669c96c47083eef69a6"} Jan 21 00:21:11 crc kubenswrapper[5118]: I0121 00:21:11.081962 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxrlf\" (UniqueName: \"kubernetes.io/projected/403f2683-0efe-4220-b481-fd8ec6a89da0-kube-api-access-kxrlf\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7\" (UID: \"403f2683-0efe-4220-b481-fd8ec6a89da0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" Jan 21 00:21:11 crc kubenswrapper[5118]: I0121 00:21:11.175662 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" Jan 21 00:21:11 crc kubenswrapper[5118]: I0121 00:21:11.359626 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7"] Jan 21 00:21:11 crc kubenswrapper[5118]: W0121 00:21:11.362316 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod403f2683_0efe_4220_b481_fd8ec6a89da0.slice/crio-d603bef2856f89655b1f43704a918e52807ea624cb1a86846b97abadcf3b019b WatchSource:0}: Error finding container d603bef2856f89655b1f43704a918e52807ea624cb1a86846b97abadcf3b019b: Status 404 returned error can't find the container with id d603bef2856f89655b1f43704a918e52807ea624cb1a86846b97abadcf3b019b Jan 21 00:21:12 crc kubenswrapper[5118]: I0121 00:21:12.073723 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" event={"ID":"403f2683-0efe-4220-b481-fd8ec6a89da0","Type":"ContainerStarted","Data":"d603bef2856f89655b1f43704a918e52807ea624cb1a86846b97abadcf3b019b"} Jan 21 00:21:13 crc kubenswrapper[5118]: I0121 00:21:13.084284 5118 generic.go:358] "Generic (PLEG): container finished" podID="1d7805d7-a48a-488e-9f51-715cd1e444bf" containerID="2f6f6d1671960529fbb63ea9cced4be5e9ef80776d247fa50316ef7aa6ac5224" exitCode=0 Jan 21 00:21:13 crc kubenswrapper[5118]: I0121 00:21:13.084545 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt" event={"ID":"1d7805d7-a48a-488e-9f51-715cd1e444bf","Type":"ContainerDied","Data":"2f6f6d1671960529fbb63ea9cced4be5e9ef80776d247fa50316ef7aa6ac5224"} Jan 21 00:21:13 crc kubenswrapper[5118]: I0121 00:21:13.086867 5118 generic.go:358] "Generic (PLEG): container finished" 
podID="403f2683-0efe-4220-b481-fd8ec6a89da0" containerID="113ebefed5b8f5c4f19f79ba4a3bbd13f9677e96cd6300ccdda4771e23c77855" exitCode=0 Jan 21 00:21:13 crc kubenswrapper[5118]: I0121 00:21:13.087005 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" event={"ID":"403f2683-0efe-4220-b481-fd8ec6a89da0","Type":"ContainerDied","Data":"113ebefed5b8f5c4f19f79ba4a3bbd13f9677e96cd6300ccdda4771e23c77855"} Jan 21 00:21:14 crc kubenswrapper[5118]: I0121 00:21:14.096118 5118 generic.go:358] "Generic (PLEG): container finished" podID="1d7805d7-a48a-488e-9f51-715cd1e444bf" containerID="d6c3e7fbac67776d6fa12c060fa2a7b9760b09c08d3206c143f0cedaaba8f903" exitCode=0 Jan 21 00:21:14 crc kubenswrapper[5118]: I0121 00:21:14.096278 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt" event={"ID":"1d7805d7-a48a-488e-9f51-715cd1e444bf","Type":"ContainerDied","Data":"d6c3e7fbac67776d6fa12c060fa2a7b9760b09c08d3206c143f0cedaaba8f903"} Jan 21 00:21:17 crc kubenswrapper[5118]: I0121 00:21:17.162905 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt" Jan 21 00:21:17 crc kubenswrapper[5118]: I0121 00:21:17.312809 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d7805d7-a48a-488e-9f51-715cd1e444bf-bundle\") pod \"1d7805d7-a48a-488e-9f51-715cd1e444bf\" (UID: \"1d7805d7-a48a-488e-9f51-715cd1e444bf\") " Jan 21 00:21:17 crc kubenswrapper[5118]: I0121 00:21:17.312873 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d7805d7-a48a-488e-9f51-715cd1e444bf-util\") pod \"1d7805d7-a48a-488e-9f51-715cd1e444bf\" (UID: \"1d7805d7-a48a-488e-9f51-715cd1e444bf\") " Jan 21 00:21:17 crc kubenswrapper[5118]: I0121 00:21:17.312895 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d66m4\" (UniqueName: \"kubernetes.io/projected/1d7805d7-a48a-488e-9f51-715cd1e444bf-kube-api-access-d66m4\") pod \"1d7805d7-a48a-488e-9f51-715cd1e444bf\" (UID: \"1d7805d7-a48a-488e-9f51-715cd1e444bf\") " Jan 21 00:21:17 crc kubenswrapper[5118]: I0121 00:21:17.313607 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d7805d7-a48a-488e-9f51-715cd1e444bf-bundle" (OuterVolumeSpecName: "bundle") pod "1d7805d7-a48a-488e-9f51-715cd1e444bf" (UID: "1d7805d7-a48a-488e-9f51-715cd1e444bf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:21:17 crc kubenswrapper[5118]: I0121 00:21:17.333314 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d7805d7-a48a-488e-9f51-715cd1e444bf-kube-api-access-d66m4" (OuterVolumeSpecName: "kube-api-access-d66m4") pod "1d7805d7-a48a-488e-9f51-715cd1e444bf" (UID: "1d7805d7-a48a-488e-9f51-715cd1e444bf"). InnerVolumeSpecName "kube-api-access-d66m4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:21:17 crc kubenswrapper[5118]: I0121 00:21:17.345339 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt" event={"ID":"1d7805d7-a48a-488e-9f51-715cd1e444bf","Type":"ContainerDied","Data":"b2782c39ee6a70be073943a3b4db87abf984a952d30d4669c96c47083eef69a6"} Jan 21 00:21:17 crc kubenswrapper[5118]: I0121 00:21:17.345410 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2782c39ee6a70be073943a3b4db87abf984a952d30d4669c96c47083eef69a6" Jan 21 00:21:17 crc kubenswrapper[5118]: I0121 00:21:17.345368 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt" Jan 21 00:21:17 crc kubenswrapper[5118]: I0121 00:21:17.460884 5118 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d7805d7-a48a-488e-9f51-715cd1e444bf-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:21:17 crc kubenswrapper[5118]: I0121 00:21:17.461379 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d66m4\" (UniqueName: \"kubernetes.io/projected/1d7805d7-a48a-488e-9f51-715cd1e444bf-kube-api-access-d66m4\") on node \"crc\" DevicePath \"\"" Jan 21 00:21:17 crc kubenswrapper[5118]: I0121 00:21:17.740338 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d7805d7-a48a-488e-9f51-715cd1e444bf-util" (OuterVolumeSpecName: "util") pod "1d7805d7-a48a-488e-9f51-715cd1e444bf" (UID: "1d7805d7-a48a-488e-9f51-715cd1e444bf"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:21:17 crc kubenswrapper[5118]: I0121 00:21:17.764331 5118 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d7805d7-a48a-488e-9f51-715cd1e444bf-util\") on node \"crc\" DevicePath \"\"" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.406522 5118 generic.go:358] "Generic (PLEG): container finished" podID="403f2683-0efe-4220-b481-fd8ec6a89da0" containerID="8af19cfd2b8083887a3dd31206227630c18b4b4ba4e380c9632d2086d8b576c9" exitCode=0 Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.406587 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" event={"ID":"403f2683-0efe-4220-b481-fd8ec6a89da0","Type":"ContainerDied","Data":"8af19cfd2b8083887a3dd31206227630c18b4b4ba4e380c9632d2086d8b576c9"} Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.510865 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-6mrsj"] Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.511483 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d7805d7-a48a-488e-9f51-715cd1e444bf" containerName="util" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.511500 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7805d7-a48a-488e-9f51-715cd1e444bf" containerName="util" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.511540 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d7805d7-a48a-488e-9f51-715cd1e444bf" containerName="pull" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.511546 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7805d7-a48a-488e-9f51-715cd1e444bf" containerName="pull" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.511555 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="1d7805d7-a48a-488e-9f51-715cd1e444bf" containerName="extract" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.511560 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7805d7-a48a-488e-9f51-715cd1e444bf" containerName="extract" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.511658 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="1d7805d7-a48a-488e-9f51-715cd1e444bf" containerName="extract" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.517376 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-6mrsj" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.519757 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.520547 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.522678 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-xczjx\"" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.532326 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-6mrsj"] Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.623534 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-959mg\" (UniqueName: \"kubernetes.io/projected/3b71239b-0442-4a3d-9df1-d0c8727f356b-kube-api-access-959mg\") pod \"obo-prometheus-operator-9bc85b4bf-6mrsj\" (UID: \"3b71239b-0442-4a3d-9df1-d0c8727f356b\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-6mrsj" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.658204 5118 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf"] Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.662554 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.667977 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-rr2px\"" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.668984 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.676905 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-jdldg"] Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.680915 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-jdldg" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.682663 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf"] Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.687754 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-jdldg"] Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.724895 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77696d8df9-jdldg\" (UID: \"c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-jdldg" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.724982 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77696d8df9-jdldg\" (UID: \"c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-jdldg" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.725034 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/66602a58-82f5-428b-8473-2f3e878d94e5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf\" (UID: \"66602a58-82f5-428b-8473-2f3e878d94e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 
00:21:22.725057 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-959mg\" (UniqueName: \"kubernetes.io/projected/3b71239b-0442-4a3d-9df1-d0c8727f356b-kube-api-access-959mg\") pod \"obo-prometheus-operator-9bc85b4bf-6mrsj\" (UID: \"3b71239b-0442-4a3d-9df1-d0c8727f356b\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-6mrsj" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.725289 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/66602a58-82f5-428b-8473-2f3e878d94e5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf\" (UID: \"66602a58-82f5-428b-8473-2f3e878d94e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.748918 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-959mg\" (UniqueName: \"kubernetes.io/projected/3b71239b-0442-4a3d-9df1-d0c8727f356b-kube-api-access-959mg\") pod \"obo-prometheus-operator-9bc85b4bf-6mrsj\" (UID: \"3b71239b-0442-4a3d-9df1-d0c8727f356b\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-6mrsj" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.826504 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77696d8df9-jdldg\" (UID: \"c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-jdldg" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.826609 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-77696d8df9-jdldg\" (UID: \"c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-jdldg" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.826652 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/66602a58-82f5-428b-8473-2f3e878d94e5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf\" (UID: \"66602a58-82f5-428b-8473-2f3e878d94e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.826697 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/66602a58-82f5-428b-8473-2f3e878d94e5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf\" (UID: \"66602a58-82f5-428b-8473-2f3e878d94e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.830436 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77696d8df9-jdldg\" (UID: \"c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-jdldg" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.830568 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77696d8df9-jdldg\" (UID: \"c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-jdldg" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.830789 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/66602a58-82f5-428b-8473-2f3e878d94e5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf\" (UID: \"66602a58-82f5-428b-8473-2f3e878d94e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.836364 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/66602a58-82f5-428b-8473-2f3e878d94e5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf\" (UID: \"66602a58-82f5-428b-8473-2f3e878d94e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.838194 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-6mrsj" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.982761 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf" Jan 21 00:21:22 crc kubenswrapper[5118]: I0121 00:21:22.997904 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-jdldg" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.113908 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-f76dw"] Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.123042 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-f76dw" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.125553 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.125896 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-b62km\"" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.130544 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a82d3afe-1c85-447e-8430-14b7b3aa4780-observability-operator-tls\") pod \"observability-operator-85c68dddb-f76dw\" (UID: \"a82d3afe-1c85-447e-8430-14b7b3aa4780\") " pod="openshift-operators/observability-operator-85c68dddb-f76dw" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.130643 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lc6x\" (UniqueName: \"kubernetes.io/projected/a82d3afe-1c85-447e-8430-14b7b3aa4780-kube-api-access-6lc6x\") pod \"observability-operator-85c68dddb-f76dw\" (UID: \"a82d3afe-1c85-447e-8430-14b7b3aa4780\") " pod="openshift-operators/observability-operator-85c68dddb-f76dw" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.145679 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-f76dw"] Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.232999 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6lc6x\" (UniqueName: \"kubernetes.io/projected/a82d3afe-1c85-447e-8430-14b7b3aa4780-kube-api-access-6lc6x\") pod \"observability-operator-85c68dddb-f76dw\" (UID: \"a82d3afe-1c85-447e-8430-14b7b3aa4780\") " 
pod="openshift-operators/observability-operator-85c68dddb-f76dw" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.233072 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a82d3afe-1c85-447e-8430-14b7b3aa4780-observability-operator-tls\") pod \"observability-operator-85c68dddb-f76dw\" (UID: \"a82d3afe-1c85-447e-8430-14b7b3aa4780\") " pod="openshift-operators/observability-operator-85c68dddb-f76dw" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.247116 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-lcwrz"] Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.250680 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-lcwrz" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.256392 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-s4db2\"" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.269562 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lc6x\" (UniqueName: \"kubernetes.io/projected/a82d3afe-1c85-447e-8430-14b7b3aa4780-kube-api-access-6lc6x\") pod \"observability-operator-85c68dddb-f76dw\" (UID: \"a82d3afe-1c85-447e-8430-14b7b3aa4780\") " pod="openshift-operators/observability-operator-85c68dddb-f76dw" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.270765 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a82d3afe-1c85-447e-8430-14b7b3aa4780-observability-operator-tls\") pod \"observability-operator-85c68dddb-f76dw\" (UID: \"a82d3afe-1c85-447e-8430-14b7b3aa4780\") " pod="openshift-operators/observability-operator-85c68dddb-f76dw" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.271773 
5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-lcwrz"] Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.334304 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5zbf\" (UniqueName: \"kubernetes.io/projected/a9b6f709-1ac0-463b-be90-11b3065eb4d9-kube-api-access-k5zbf\") pod \"perses-operator-669c9f96b5-lcwrz\" (UID: \"a9b6f709-1ac0-463b-be90-11b3065eb4d9\") " pod="openshift-operators/perses-operator-669c9f96b5-lcwrz" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.334366 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9b6f709-1ac0-463b-be90-11b3065eb4d9-openshift-service-ca\") pod \"perses-operator-669c9f96b5-lcwrz\" (UID: \"a9b6f709-1ac0-463b-be90-11b3065eb4d9\") " pod="openshift-operators/perses-operator-669c9f96b5-lcwrz" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.413023 5118 generic.go:358] "Generic (PLEG): container finished" podID="403f2683-0efe-4220-b481-fd8ec6a89da0" containerID="61cad599d777990207461e09eb4de619e2e28e28414875de4db5cf078a6fde9b" exitCode=0 Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.413063 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" event={"ID":"403f2683-0efe-4220-b481-fd8ec6a89da0","Type":"ContainerDied","Data":"61cad599d777990207461e09eb4de619e2e28e28414875de4db5cf078a6fde9b"} Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.437897 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k5zbf\" (UniqueName: \"kubernetes.io/projected/a9b6f709-1ac0-463b-be90-11b3065eb4d9-kube-api-access-k5zbf\") pod \"perses-operator-669c9f96b5-lcwrz\" (UID: \"a9b6f709-1ac0-463b-be90-11b3065eb4d9\") " 
pod="openshift-operators/perses-operator-669c9f96b5-lcwrz" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.437967 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9b6f709-1ac0-463b-be90-11b3065eb4d9-openshift-service-ca\") pod \"perses-operator-669c9f96b5-lcwrz\" (UID: \"a9b6f709-1ac0-463b-be90-11b3065eb4d9\") " pod="openshift-operators/perses-operator-669c9f96b5-lcwrz" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.439174 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9b6f709-1ac0-463b-be90-11b3065eb4d9-openshift-service-ca\") pod \"perses-operator-669c9f96b5-lcwrz\" (UID: \"a9b6f709-1ac0-463b-be90-11b3065eb4d9\") " pod="openshift-operators/perses-operator-669c9f96b5-lcwrz" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.456565 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-jdldg"] Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.462054 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5zbf\" (UniqueName: \"kubernetes.io/projected/a9b6f709-1ac0-463b-be90-11b3065eb4d9-kube-api-access-k5zbf\") pod \"perses-operator-669c9f96b5-lcwrz\" (UID: \"a9b6f709-1ac0-463b-be90-11b3065eb4d9\") " pod="openshift-operators/perses-operator-669c9f96b5-lcwrz" Jan 21 00:21:23 crc kubenswrapper[5118]: W0121 00:21:23.466416 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8f5c4be_4fd8_4dde_b8da_13d367a1ca0a.slice/crio-6115ab034f0dba7345487ea2eb2a61a2d3f482a1daefa544bec63772ce945f9c WatchSource:0}: Error finding container 6115ab034f0dba7345487ea2eb2a61a2d3f482a1daefa544bec63772ce945f9c: Status 404 returned error can't find the container with id 
6115ab034f0dba7345487ea2eb2a61a2d3f482a1daefa544bec63772ce945f9c Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.480425 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-f76dw" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.530374 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-6mrsj"] Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.594948 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf"] Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.609093 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-lcwrz" Jan 21 00:21:23 crc kubenswrapper[5118]: I0121 00:21:23.995048 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-f76dw"] Jan 21 00:21:24 crc kubenswrapper[5118]: W0121 00:21:24.003398 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda82d3afe_1c85_447e_8430_14b7b3aa4780.slice/crio-869ad4b700db60c8c6c71d3cae0f1e256b3d37561f32fa8ab78540695731b62f WatchSource:0}: Error finding container 869ad4b700db60c8c6c71d3cae0f1e256b3d37561f32fa8ab78540695731b62f: Status 404 returned error can't find the container with id 869ad4b700db60c8c6c71d3cae0f1e256b3d37561f32fa8ab78540695731b62f Jan 21 00:21:24 crc kubenswrapper[5118]: I0121 00:21:24.157000 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-lcwrz"] Jan 21 00:21:24 crc kubenswrapper[5118]: W0121 00:21:24.186735 5118 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9b6f709_1ac0_463b_be90_11b3065eb4d9.slice/crio-87d32392a1aea66ebbaadb655dd1fb14f1f883ce8606532c44829983427ac929 WatchSource:0}: Error finding container 87d32392a1aea66ebbaadb655dd1fb14f1f883ce8606532c44829983427ac929: Status 404 returned error can't find the container with id 87d32392a1aea66ebbaadb655dd1fb14f1f883ce8606532c44829983427ac929 Jan 21 00:21:24 crc kubenswrapper[5118]: I0121 00:21:24.436364 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-lcwrz" event={"ID":"a9b6f709-1ac0-463b-be90-11b3065eb4d9","Type":"ContainerStarted","Data":"87d32392a1aea66ebbaadb655dd1fb14f1f883ce8606532c44829983427ac929"} Jan 21 00:21:24 crc kubenswrapper[5118]: I0121 00:21:24.448998 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-f76dw" event={"ID":"a82d3afe-1c85-447e-8430-14b7b3aa4780","Type":"ContainerStarted","Data":"869ad4b700db60c8c6c71d3cae0f1e256b3d37561f32fa8ab78540695731b62f"} Jan 21 00:21:24 crc kubenswrapper[5118]: I0121 00:21:24.450606 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf" event={"ID":"66602a58-82f5-428b-8473-2f3e878d94e5","Type":"ContainerStarted","Data":"29e5d0e1d20dd8e04b0e7c78e3da28bdba3d931532dfe044092d24bc1988203c"} Jan 21 00:21:24 crc kubenswrapper[5118]: I0121 00:21:24.452961 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-jdldg" event={"ID":"c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a","Type":"ContainerStarted","Data":"6115ab034f0dba7345487ea2eb2a61a2d3f482a1daefa544bec63772ce945f9c"} Jan 21 00:21:24 crc kubenswrapper[5118]: I0121 00:21:24.454550 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-6mrsj" 
event={"ID":"3b71239b-0442-4a3d-9df1-d0c8727f356b","Type":"ContainerStarted","Data":"736cec4baf8bee0d89ac6c432a60a7ccde75a6719c15f43f466c2b495198ea5f"} Jan 21 00:21:24 crc kubenswrapper[5118]: I0121 00:21:24.927237 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" Jan 21 00:21:24 crc kubenswrapper[5118]: I0121 00:21:24.963738 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/403f2683-0efe-4220-b481-fd8ec6a89da0-util\") pod \"403f2683-0efe-4220-b481-fd8ec6a89da0\" (UID: \"403f2683-0efe-4220-b481-fd8ec6a89da0\") " Jan 21 00:21:24 crc kubenswrapper[5118]: I0121 00:21:24.963864 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxrlf\" (UniqueName: \"kubernetes.io/projected/403f2683-0efe-4220-b481-fd8ec6a89da0-kube-api-access-kxrlf\") pod \"403f2683-0efe-4220-b481-fd8ec6a89da0\" (UID: \"403f2683-0efe-4220-b481-fd8ec6a89da0\") " Jan 21 00:21:24 crc kubenswrapper[5118]: I0121 00:21:24.963881 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/403f2683-0efe-4220-b481-fd8ec6a89da0-bundle\") pod \"403f2683-0efe-4220-b481-fd8ec6a89da0\" (UID: \"403f2683-0efe-4220-b481-fd8ec6a89da0\") " Jan 21 00:21:24 crc kubenswrapper[5118]: I0121 00:21:24.965822 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/403f2683-0efe-4220-b481-fd8ec6a89da0-bundle" (OuterVolumeSpecName: "bundle") pod "403f2683-0efe-4220-b481-fd8ec6a89da0" (UID: "403f2683-0efe-4220-b481-fd8ec6a89da0"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:21:24 crc kubenswrapper[5118]: I0121 00:21:24.973633 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/403f2683-0efe-4220-b481-fd8ec6a89da0-util" (OuterVolumeSpecName: "util") pod "403f2683-0efe-4220-b481-fd8ec6a89da0" (UID: "403f2683-0efe-4220-b481-fd8ec6a89da0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:21:25 crc kubenswrapper[5118]: I0121 00:21:25.013033 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/403f2683-0efe-4220-b481-fd8ec6a89da0-kube-api-access-kxrlf" (OuterVolumeSpecName: "kube-api-access-kxrlf") pod "403f2683-0efe-4220-b481-fd8ec6a89da0" (UID: "403f2683-0efe-4220-b481-fd8ec6a89da0"). InnerVolumeSpecName "kube-api-access-kxrlf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:21:25 crc kubenswrapper[5118]: I0121 00:21:25.065207 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kxrlf\" (UniqueName: \"kubernetes.io/projected/403f2683-0efe-4220-b481-fd8ec6a89da0-kube-api-access-kxrlf\") on node \"crc\" DevicePath \"\"" Jan 21 00:21:25 crc kubenswrapper[5118]: I0121 00:21:25.065257 5118 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/403f2683-0efe-4220-b481-fd8ec6a89da0-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:21:25 crc kubenswrapper[5118]: I0121 00:21:25.065269 5118 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/403f2683-0efe-4220-b481-fd8ec6a89da0-util\") on node \"crc\" DevicePath \"\"" Jan 21 00:21:25 crc kubenswrapper[5118]: I0121 00:21:25.487262 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" 
event={"ID":"403f2683-0efe-4220-b481-fd8ec6a89da0","Type":"ContainerDied","Data":"d603bef2856f89655b1f43704a918e52807ea624cb1a86846b97abadcf3b019b"} Jan 21 00:21:25 crc kubenswrapper[5118]: I0121 00:21:25.487337 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d603bef2856f89655b1f43704a918e52807ea624cb1a86846b97abadcf3b019b" Jan 21 00:21:25 crc kubenswrapper[5118]: I0121 00:21:25.487459 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7" Jan 21 00:21:25 crc kubenswrapper[5118]: I0121 00:21:25.996659 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nf6rb"] Jan 21 00:21:25 crc kubenswrapper[5118]: I0121 00:21:25.997258 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="403f2683-0efe-4220-b481-fd8ec6a89da0" containerName="util" Jan 21 00:21:25 crc kubenswrapper[5118]: I0121 00:21:25.997275 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="403f2683-0efe-4220-b481-fd8ec6a89da0" containerName="util" Jan 21 00:21:25 crc kubenswrapper[5118]: I0121 00:21:25.997289 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="403f2683-0efe-4220-b481-fd8ec6a89da0" containerName="extract" Jan 21 00:21:25 crc kubenswrapper[5118]: I0121 00:21:25.997296 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="403f2683-0efe-4220-b481-fd8ec6a89da0" containerName="extract" Jan 21 00:21:25 crc kubenswrapper[5118]: I0121 00:21:25.997315 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="403f2683-0efe-4220-b481-fd8ec6a89da0" containerName="pull" Jan 21 00:21:25 crc kubenswrapper[5118]: I0121 00:21:25.997321 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="403f2683-0efe-4220-b481-fd8ec6a89da0" containerName="pull" Jan 21 00:21:25 crc kubenswrapper[5118]: I0121 
00:21:25.997413 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="403f2683-0efe-4220-b481-fd8ec6a89da0" containerName="extract" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.304508 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nf6rb"] Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.304660 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.408982 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-85b59756dc-hxvxk"] Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.463454 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-85b59756dc-hxvxk"] Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.463624 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-85b59756dc-hxvxk" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.480766 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.480782 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.480911 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.480989 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-hpnfn\"" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.486900 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-ktrsr\" (UniqueName: \"kubernetes.io/projected/47c53b5d-9d7c-44d1-a742-91d37beede92-kube-api-access-ktrsr\") pod \"redhat-operators-nf6rb\" (UID: \"47c53b5d-9d7c-44d1-a742-91d37beede92\") " pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.486947 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c53b5d-9d7c-44d1-a742-91d37beede92-catalog-content\") pod \"redhat-operators-nf6rb\" (UID: \"47c53b5d-9d7c-44d1-a742-91d37beede92\") " pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.487062 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c53b5d-9d7c-44d1-a742-91d37beede92-utilities\") pod \"redhat-operators-nf6rb\" (UID: \"47c53b5d-9d7c-44d1-a742-91d37beede92\") " pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.588343 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl8vl\" (UniqueName: \"kubernetes.io/projected/cd6cbfc5-cdc4-4142-956b-c2f60a030179-kube-api-access-nl8vl\") pod \"elastic-operator-85b59756dc-hxvxk\" (UID: \"cd6cbfc5-cdc4-4142-956b-c2f60a030179\") " pod="service-telemetry/elastic-operator-85b59756dc-hxvxk" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.588412 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd6cbfc5-cdc4-4142-956b-c2f60a030179-webhook-cert\") pod \"elastic-operator-85b59756dc-hxvxk\" (UID: \"cd6cbfc5-cdc4-4142-956b-c2f60a030179\") " pod="service-telemetry/elastic-operator-85b59756dc-hxvxk" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.588465 
5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c53b5d-9d7c-44d1-a742-91d37beede92-utilities\") pod \"redhat-operators-nf6rb\" (UID: \"47c53b5d-9d7c-44d1-a742-91d37beede92\") " pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.588489 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd6cbfc5-cdc4-4142-956b-c2f60a030179-apiservice-cert\") pod \"elastic-operator-85b59756dc-hxvxk\" (UID: \"cd6cbfc5-cdc4-4142-956b-c2f60a030179\") " pod="service-telemetry/elastic-operator-85b59756dc-hxvxk" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.588508 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ktrsr\" (UniqueName: \"kubernetes.io/projected/47c53b5d-9d7c-44d1-a742-91d37beede92-kube-api-access-ktrsr\") pod \"redhat-operators-nf6rb\" (UID: \"47c53b5d-9d7c-44d1-a742-91d37beede92\") " pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.588528 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c53b5d-9d7c-44d1-a742-91d37beede92-catalog-content\") pod \"redhat-operators-nf6rb\" (UID: \"47c53b5d-9d7c-44d1-a742-91d37beede92\") " pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.588947 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c53b5d-9d7c-44d1-a742-91d37beede92-catalog-content\") pod \"redhat-operators-nf6rb\" (UID: \"47c53b5d-9d7c-44d1-a742-91d37beede92\") " pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.589209 5118 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c53b5d-9d7c-44d1-a742-91d37beede92-utilities\") pod \"redhat-operators-nf6rb\" (UID: \"47c53b5d-9d7c-44d1-a742-91d37beede92\") " pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.617186 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktrsr\" (UniqueName: \"kubernetes.io/projected/47c53b5d-9d7c-44d1-a742-91d37beede92-kube-api-access-ktrsr\") pod \"redhat-operators-nf6rb\" (UID: \"47c53b5d-9d7c-44d1-a742-91d37beede92\") " pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.628401 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.690203 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd6cbfc5-cdc4-4142-956b-c2f60a030179-webhook-cert\") pod \"elastic-operator-85b59756dc-hxvxk\" (UID: \"cd6cbfc5-cdc4-4142-956b-c2f60a030179\") " pod="service-telemetry/elastic-operator-85b59756dc-hxvxk" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.690272 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd6cbfc5-cdc4-4142-956b-c2f60a030179-apiservice-cert\") pod \"elastic-operator-85b59756dc-hxvxk\" (UID: \"cd6cbfc5-cdc4-4142-956b-c2f60a030179\") " pod="service-telemetry/elastic-operator-85b59756dc-hxvxk" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.690305 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nl8vl\" (UniqueName: \"kubernetes.io/projected/cd6cbfc5-cdc4-4142-956b-c2f60a030179-kube-api-access-nl8vl\") pod 
\"elastic-operator-85b59756dc-hxvxk\" (UID: \"cd6cbfc5-cdc4-4142-956b-c2f60a030179\") " pod="service-telemetry/elastic-operator-85b59756dc-hxvxk" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.700307 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd6cbfc5-cdc4-4142-956b-c2f60a030179-webhook-cert\") pod \"elastic-operator-85b59756dc-hxvxk\" (UID: \"cd6cbfc5-cdc4-4142-956b-c2f60a030179\") " pod="service-telemetry/elastic-operator-85b59756dc-hxvxk" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.721086 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl8vl\" (UniqueName: \"kubernetes.io/projected/cd6cbfc5-cdc4-4142-956b-c2f60a030179-kube-api-access-nl8vl\") pod \"elastic-operator-85b59756dc-hxvxk\" (UID: \"cd6cbfc5-cdc4-4142-956b-c2f60a030179\") " pod="service-telemetry/elastic-operator-85b59756dc-hxvxk" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.725183 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd6cbfc5-cdc4-4142-956b-c2f60a030179-apiservice-cert\") pod \"elastic-operator-85b59756dc-hxvxk\" (UID: \"cd6cbfc5-cdc4-4142-956b-c2f60a030179\") " pod="service-telemetry/elastic-operator-85b59756dc-hxvxk" Jan 21 00:21:26 crc kubenswrapper[5118]: I0121 00:21:26.820920 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-85b59756dc-hxvxk" Jan 21 00:21:27 crc kubenswrapper[5118]: I0121 00:21:27.344368 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nf6rb"] Jan 21 00:21:27 crc kubenswrapper[5118]: I0121 00:21:27.508622 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nf6rb" event={"ID":"47c53b5d-9d7c-44d1-a742-91d37beede92","Type":"ContainerStarted","Data":"363516e9f4ff80cad5a59c266ea35121f3e0b2184334d753224bcb804d1187e5"} Jan 21 00:21:27 crc kubenswrapper[5118]: I0121 00:21:27.732664 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-85b59756dc-hxvxk"] Jan 21 00:21:28 crc kubenswrapper[5118]: I0121 00:21:28.610590 5118 generic.go:358] "Generic (PLEG): container finished" podID="47c53b5d-9d7c-44d1-a742-91d37beede92" containerID="2b8f9325627489cedd0c26d21a0aa9519d8f4ac6e734e293fed59b09e2efdc8d" exitCode=0 Jan 21 00:21:28 crc kubenswrapper[5118]: I0121 00:21:28.610682 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nf6rb" event={"ID":"47c53b5d-9d7c-44d1-a742-91d37beede92","Type":"ContainerDied","Data":"2b8f9325627489cedd0c26d21a0aa9519d8f4ac6e734e293fed59b09e2efdc8d"} Jan 21 00:21:28 crc kubenswrapper[5118]: I0121 00:21:28.638580 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-85b59756dc-hxvxk" event={"ID":"cd6cbfc5-cdc4-4142-956b-c2f60a030179","Type":"ContainerStarted","Data":"9d340202fc27180cd46d90ff3f96ca64d89196a8ff2b10e08a7ebee138c7419c"} Jan 21 00:21:30 crc kubenswrapper[5118]: I0121 00:21:30.691565 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nf6rb" event={"ID":"47c53b5d-9d7c-44d1-a742-91d37beede92","Type":"ContainerStarted","Data":"2651e472c2c8b7f967145e4d0591618a122d65d77d4f1de9e128b680327ab074"} Jan 21 00:21:34 crc 
kubenswrapper[5118]: I0121 00:21:34.770951 5118 generic.go:358] "Generic (PLEG): container finished" podID="47c53b5d-9d7c-44d1-a742-91d37beede92" containerID="2651e472c2c8b7f967145e4d0591618a122d65d77d4f1de9e128b680327ab074" exitCode=0 Jan 21 00:21:34 crc kubenswrapper[5118]: I0121 00:21:34.771054 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nf6rb" event={"ID":"47c53b5d-9d7c-44d1-a742-91d37beede92","Type":"ContainerDied","Data":"2651e472c2c8b7f967145e4d0591618a122d65d77d4f1de9e128b680327ab074"} Jan 21 00:21:39 crc kubenswrapper[5118]: I0121 00:21:39.278089 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dgcx5"] Jan 21 00:21:39 crc kubenswrapper[5118]: I0121 00:21:39.282780 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dgcx5" Jan 21 00:21:39 crc kubenswrapper[5118]: I0121 00:21:39.285063 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Jan 21 00:21:39 crc kubenswrapper[5118]: I0121 00:21:39.286623 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Jan 21 00:21:39 crc kubenswrapper[5118]: I0121 00:21:39.286911 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-b2d8z\"" Jan 21 00:21:39 crc kubenswrapper[5118]: I0121 00:21:39.290726 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dgcx5"] Jan 21 00:21:39 crc kubenswrapper[5118]: I0121 00:21:39.354439 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/edd1b292-13e4-40b3-8889-56a41343b200-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-dgcx5\" (UID: \"edd1b292-13e4-40b3-8889-56a41343b200\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dgcx5" Jan 21 00:21:39 crc kubenswrapper[5118]: I0121 00:21:39.354507 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb5b9\" (UniqueName: \"kubernetes.io/projected/edd1b292-13e4-40b3-8889-56a41343b200-kube-api-access-rb5b9\") pod \"cert-manager-operator-controller-manager-64c74584c4-dgcx5\" (UID: \"edd1b292-13e4-40b3-8889-56a41343b200\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dgcx5" Jan 21 00:21:39 crc kubenswrapper[5118]: I0121 00:21:39.456204 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rb5b9\" (UniqueName: \"kubernetes.io/projected/edd1b292-13e4-40b3-8889-56a41343b200-kube-api-access-rb5b9\") pod \"cert-manager-operator-controller-manager-64c74584c4-dgcx5\" (UID: \"edd1b292-13e4-40b3-8889-56a41343b200\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dgcx5" Jan 21 00:21:39 crc kubenswrapper[5118]: I0121 00:21:39.456331 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/edd1b292-13e4-40b3-8889-56a41343b200-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-dgcx5\" (UID: \"edd1b292-13e4-40b3-8889-56a41343b200\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dgcx5" Jan 21 00:21:39 crc kubenswrapper[5118]: I0121 00:21:39.456806 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/edd1b292-13e4-40b3-8889-56a41343b200-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-dgcx5\" (UID: 
\"edd1b292-13e4-40b3-8889-56a41343b200\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dgcx5" Jan 21 00:21:39 crc kubenswrapper[5118]: I0121 00:21:39.519522 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb5b9\" (UniqueName: \"kubernetes.io/projected/edd1b292-13e4-40b3-8889-56a41343b200-kube-api-access-rb5b9\") pod \"cert-manager-operator-controller-manager-64c74584c4-dgcx5\" (UID: \"edd1b292-13e4-40b3-8889-56a41343b200\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dgcx5" Jan 21 00:21:39 crc kubenswrapper[5118]: I0121 00:21:39.615375 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dgcx5" Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.855005 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dgcx5"] Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.874843 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-jdldg" event={"ID":"c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a","Type":"ContainerStarted","Data":"5bcb16102ea94cfddb1c416facdcef9464f8cdb29ac97b8c0d73087fd7183ed3"} Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.879037 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-6mrsj" event={"ID":"3b71239b-0442-4a3d-9df1-d0c8727f356b","Type":"ContainerStarted","Data":"7e259bdbb0eea22acc12e7e6b07e4c5ace14b866b3a0eb2ae9164d3d307afdc2"} Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.881549 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-lcwrz" 
event={"ID":"a9b6f709-1ac0-463b-be90-11b3065eb4d9","Type":"ContainerStarted","Data":"7f8767dd31a7e26e507a296f21c0e9fb6349d4e2c925a6b08188d9e4623a9838"} Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.882199 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-lcwrz" Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.885821 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nf6rb" event={"ID":"47c53b5d-9d7c-44d1-a742-91d37beede92","Type":"ContainerStarted","Data":"3c8520506a654fa2f943175398e6675ee5a10a83d889409adccf5b4d3cd4a87f"} Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.890680 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-jdldg" podStartSLOduration=2.063582872 podStartE2EDuration="30.890660602s" podCreationTimestamp="2026-01-21 00:21:22 +0000 UTC" firstStartedPulling="2026-01-21 00:21:23.485324427 +0000 UTC m=+738.809571445" lastFinishedPulling="2026-01-21 00:21:52.312402147 +0000 UTC m=+767.636649175" observedRunningTime="2026-01-21 00:21:52.889990704 +0000 UTC m=+768.214237732" watchObservedRunningTime="2026-01-21 00:21:52.890660602 +0000 UTC m=+768.214907620" Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.891056 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-85b59756dc-hxvxk" event={"ID":"cd6cbfc5-cdc4-4142-956b-c2f60a030179","Type":"ContainerStarted","Data":"52f1c2a9a45725f26e036ed9594c12671edbe1aa7c2da0ecff220f9baf09255f"} Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.901362 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-f76dw" event={"ID":"a82d3afe-1c85-447e-8430-14b7b3aa4780","Type":"ContainerStarted","Data":"3599496d22d55fba5cd8fff51c1f0bb6c37680f26f4de28e58736ff5214150f5"} 
Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.901972 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-f76dw" Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.906258 5118 patch_prober.go:28] interesting pod/observability-operator-85c68dddb-f76dw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.45:8081/healthz\": dial tcp 10.217.0.45:8081: connect: connection refused" start-of-body= Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.906310 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-85c68dddb-f76dw" podUID="a82d3afe-1c85-447e-8430-14b7b3aa4780" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.45:8081/healthz\": dial tcp 10.217.0.45:8081: connect: connection refused" Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.910031 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf" event={"ID":"66602a58-82f5-428b-8473-2f3e878d94e5","Type":"ContainerStarted","Data":"57c123ad19473e90db2f8b33006f342de1ad7694a3a75f199ec3957559275e6c"} Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.923066 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nf6rb" podStartSLOduration=26.367507296 podStartE2EDuration="27.923042823s" podCreationTimestamp="2026-01-21 00:21:25 +0000 UTC" firstStartedPulling="2026-01-21 00:21:28.61123396 +0000 UTC m=+743.935480978" lastFinishedPulling="2026-01-21 00:21:30.166769487 +0000 UTC m=+745.491016505" observedRunningTime="2026-01-21 00:21:52.917879705 +0000 UTC m=+768.242126723" watchObservedRunningTime="2026-01-21 00:21:52.923042823 +0000 UTC m=+768.247289841" Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.952305 5118 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-6mrsj" podStartSLOduration=2.190508105 podStartE2EDuration="30.95228953s" podCreationTimestamp="2026-01-21 00:21:22 +0000 UTC" firstStartedPulling="2026-01-21 00:21:23.536394334 +0000 UTC m=+738.860641352" lastFinishedPulling="2026-01-21 00:21:52.298175759 +0000 UTC m=+767.622422777" observedRunningTime="2026-01-21 00:21:52.951845379 +0000 UTC m=+768.276092397" watchObservedRunningTime="2026-01-21 00:21:52.95228953 +0000 UTC m=+768.276536538" Jan 21 00:21:52 crc kubenswrapper[5118]: I0121 00:21:52.977362 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-lcwrz" podStartSLOduration=1.850642706 podStartE2EDuration="29.977337296s" podCreationTimestamp="2026-01-21 00:21:23 +0000 UTC" firstStartedPulling="2026-01-21 00:21:24.189527299 +0000 UTC m=+739.513774317" lastFinishedPulling="2026-01-21 00:21:52.316221889 +0000 UTC m=+767.640468907" observedRunningTime="2026-01-21 00:21:52.972780445 +0000 UTC m=+768.297027483" watchObservedRunningTime="2026-01-21 00:21:52.977337296 +0000 UTC m=+768.301584334" Jan 21 00:21:53 crc kubenswrapper[5118]: I0121 00:21:53.002150 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-f76dw" podStartSLOduration=1.694496244 podStartE2EDuration="30.002126015s" podCreationTimestamp="2026-01-21 00:21:23 +0000 UTC" firstStartedPulling="2026-01-21 00:21:24.004856789 +0000 UTC m=+739.329103807" lastFinishedPulling="2026-01-21 00:21:52.31248656 +0000 UTC m=+767.636733578" observedRunningTime="2026-01-21 00:21:52.998253652 +0000 UTC m=+768.322500690" watchObservedRunningTime="2026-01-21 00:21:53.002126015 +0000 UTC m=+768.326373033" Jan 21 00:21:53 crc kubenswrapper[5118]: I0121 00:21:53.026340 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf" podStartSLOduration=2.303919491 podStartE2EDuration="31.026319639s" podCreationTimestamp="2026-01-21 00:21:22 +0000 UTC" firstStartedPulling="2026-01-21 00:21:23.608991744 +0000 UTC m=+738.933238762" lastFinishedPulling="2026-01-21 00:21:52.331391892 +0000 UTC m=+767.655638910" observedRunningTime="2026-01-21 00:21:53.022042615 +0000 UTC m=+768.346289653" watchObservedRunningTime="2026-01-21 00:21:53.026319639 +0000 UTC m=+768.350566657" Jan 21 00:21:53 crc kubenswrapper[5118]: I0121 00:21:53.073141 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-85b59756dc-hxvxk" podStartSLOduration=2.541237379 podStartE2EDuration="27.073124993s" podCreationTimestamp="2026-01-21 00:21:26 +0000 UTC" firstStartedPulling="2026-01-21 00:21:27.780621616 +0000 UTC m=+743.104868644" lastFinishedPulling="2026-01-21 00:21:52.31250924 +0000 UTC m=+767.636756258" observedRunningTime="2026-01-21 00:21:53.064807192 +0000 UTC m=+768.389054240" watchObservedRunningTime="2026-01-21 00:21:53.073124993 +0000 UTC m=+768.397372011" Jan 21 00:21:53 crc kubenswrapper[5118]: I0121 00:21:53.916192 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dgcx5" event={"ID":"edd1b292-13e4-40b3-8889-56a41343b200","Type":"ContainerStarted","Data":"e57503148b6ded6b317c7328fe813e823fca3eeda1623356a729a2fd4c604fb4"} Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.119628 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.283221 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.283403 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.350560 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.350631 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.350673 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.350697 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.350728 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/3a83d7d0-ad82-4da4-8c10-269310b2e144-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.350757 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.350775 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.350806 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.350827 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.350851 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.350899 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.350928 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.350951 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.350997 5118 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.351025 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.351495 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.351827 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.351996 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.352173 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.352486 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-g9qcl\"" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.352534 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.352682 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.352957 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.353221 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.452352 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/3a83d7d0-ad82-4da4-8c10-269310b2e144-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.452413 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.452436 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.452475 5118 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.452496 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.452523 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.452581 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.452614 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: 
\"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.452637 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.452709 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.452739 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.452765 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.452818 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" 
(UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.452849 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.452874 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.453392 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.454825 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc 
kubenswrapper[5118]: I0121 00:21:54.455148 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.455609 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.461081 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.461642 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.462492 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 
00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.469709 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.473464 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.473984 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.474902 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.482781 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: 
\"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.486642 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.489715 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/3a83d7d0-ad82-4da4-8c10-269310b2e144-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.492937 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/3a83d7d0-ad82-4da4-8c10-269310b2e144-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"3a83d7d0-ad82-4da4-8c10-269310b2e144\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.527378 5118 patch_prober.go:28] interesting pod/observability-operator-85c68dddb-f76dw container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.45:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.527453 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-85c68dddb-f76dw" 
podUID="a82d3afe-1c85-447e-8430-14b7b3aa4780" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.45:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.671536 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:21:54 crc kubenswrapper[5118]: I0121 00:21:54.834578 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-f76dw" Jan 21 00:21:55 crc kubenswrapper[5118]: I0121 00:21:55.243316 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 00:21:55 crc kubenswrapper[5118]: I0121 00:21:55.936397 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"3a83d7d0-ad82-4da4-8c10-269310b2e144","Type":"ContainerStarted","Data":"33b176cca9e53190d64298b673ee49b22c18561cbc8f1764f0f68910bc351c4c"} Jan 21 00:21:56 crc kubenswrapper[5118]: I0121 00:21:56.628834 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:21:56 crc kubenswrapper[5118]: I0121 00:21:56.628895 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:21:57 crc kubenswrapper[5118]: I0121 00:21:57.676944 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nf6rb" podUID="47c53b5d-9d7c-44d1-a742-91d37beede92" containerName="registry-server" probeResult="failure" output=< Jan 21 00:21:57 crc kubenswrapper[5118]: timeout: failed to connect service ":50051" within 1s Jan 21 00:21:57 crc kubenswrapper[5118]: > Jan 21 00:22:00 crc kubenswrapper[5118]: I0121 00:22:00.121487 5118 kubelet.go:2537] "SyncLoop ADD" 
source="api" pods=["openshift-infra/auto-csr-approver-29482582-t6ttq"] Jan 21 00:22:00 crc kubenswrapper[5118]: I0121 00:22:00.128096 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482582-t6ttq" Jan 21 00:22:00 crc kubenswrapper[5118]: I0121 00:22:00.129952 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 00:22:00 crc kubenswrapper[5118]: I0121 00:22:00.130206 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 00:22:00 crc kubenswrapper[5118]: I0121 00:22:00.130325 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 00:22:00 crc kubenswrapper[5118]: I0121 00:22:00.178529 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482582-t6ttq"] Jan 21 00:22:00 crc kubenswrapper[5118]: I0121 00:22:00.179758 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxq8j\" (UniqueName: \"kubernetes.io/projected/0f097a29-16a1-4f56-873e-7bdd4ee1e659-kube-api-access-jxq8j\") pod \"auto-csr-approver-29482582-t6ttq\" (UID: \"0f097a29-16a1-4f56-873e-7bdd4ee1e659\") " pod="openshift-infra/auto-csr-approver-29482582-t6ttq" Jan 21 00:22:00 crc kubenswrapper[5118]: I0121 00:22:00.281676 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jxq8j\" (UniqueName: \"kubernetes.io/projected/0f097a29-16a1-4f56-873e-7bdd4ee1e659-kube-api-access-jxq8j\") pod \"auto-csr-approver-29482582-t6ttq\" (UID: \"0f097a29-16a1-4f56-873e-7bdd4ee1e659\") " pod="openshift-infra/auto-csr-approver-29482582-t6ttq" Jan 21 00:22:00 crc kubenswrapper[5118]: I0121 00:22:00.307842 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-jxq8j\" (UniqueName: \"kubernetes.io/projected/0f097a29-16a1-4f56-873e-7bdd4ee1e659-kube-api-access-jxq8j\") pod \"auto-csr-approver-29482582-t6ttq\" (UID: \"0f097a29-16a1-4f56-873e-7bdd4ee1e659\") " pod="openshift-infra/auto-csr-approver-29482582-t6ttq" Jan 21 00:22:00 crc kubenswrapper[5118]: I0121 00:22:00.480758 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482582-t6ttq" Jan 21 00:22:04 crc kubenswrapper[5118]: I0121 00:22:04.927861 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-lcwrz" Jan 21 00:22:06 crc kubenswrapper[5118]: I0121 00:22:06.794881 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:22:06 crc kubenswrapper[5118]: I0121 00:22:06.928179 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:22:07 crc kubenswrapper[5118]: I0121 00:22:07.112773 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nf6rb"] Jan 21 00:22:08 crc kubenswrapper[5118]: I0121 00:22:08.059093 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nf6rb" podUID="47c53b5d-9d7c-44d1-a742-91d37beede92" containerName="registry-server" containerID="cri-o://3c8520506a654fa2f943175398e6675ee5a10a83d889409adccf5b4d3cd4a87f" gracePeriod=2 Jan 21 00:22:09 crc kubenswrapper[5118]: I0121 00:22:09.069042 5118 generic.go:358] "Generic (PLEG): container finished" podID="47c53b5d-9d7c-44d1-a742-91d37beede92" containerID="3c8520506a654fa2f943175398e6675ee5a10a83d889409adccf5b4d3cd4a87f" exitCode=0 Jan 21 00:22:09 crc kubenswrapper[5118]: I0121 00:22:09.069127 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-nf6rb" event={"ID":"47c53b5d-9d7c-44d1-a742-91d37beede92","Type":"ContainerDied","Data":"3c8520506a654fa2f943175398e6675ee5a10a83d889409adccf5b4d3cd4a87f"} Jan 21 00:22:11 crc kubenswrapper[5118]: I0121 00:22:11.096913 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nf6rb" event={"ID":"47c53b5d-9d7c-44d1-a742-91d37beede92","Type":"ContainerDied","Data":"363516e9f4ff80cad5a59c266ea35121f3e0b2184334d753224bcb804d1187e5"} Jan 21 00:22:11 crc kubenswrapper[5118]: I0121 00:22:11.096952 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="363516e9f4ff80cad5a59c266ea35121f3e0b2184334d753224bcb804d1187e5" Jan 21 00:22:11 crc kubenswrapper[5118]: I0121 00:22:11.124678 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:22:11 crc kubenswrapper[5118]: I0121 00:22:11.213078 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktrsr\" (UniqueName: \"kubernetes.io/projected/47c53b5d-9d7c-44d1-a742-91d37beede92-kube-api-access-ktrsr\") pod \"47c53b5d-9d7c-44d1-a742-91d37beede92\" (UID: \"47c53b5d-9d7c-44d1-a742-91d37beede92\") " Jan 21 00:22:11 crc kubenswrapper[5118]: I0121 00:22:11.213301 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c53b5d-9d7c-44d1-a742-91d37beede92-utilities\") pod \"47c53b5d-9d7c-44d1-a742-91d37beede92\" (UID: \"47c53b5d-9d7c-44d1-a742-91d37beede92\") " Jan 21 00:22:11 crc kubenswrapper[5118]: I0121 00:22:11.214341 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47c53b5d-9d7c-44d1-a742-91d37beede92-utilities" (OuterVolumeSpecName: "utilities") pod "47c53b5d-9d7c-44d1-a742-91d37beede92" (UID: "47c53b5d-9d7c-44d1-a742-91d37beede92"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:22:11 crc kubenswrapper[5118]: I0121 00:22:11.214449 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c53b5d-9d7c-44d1-a742-91d37beede92-catalog-content\") pod \"47c53b5d-9d7c-44d1-a742-91d37beede92\" (UID: \"47c53b5d-9d7c-44d1-a742-91d37beede92\") " Jan 21 00:22:11 crc kubenswrapper[5118]: I0121 00:22:11.214784 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c53b5d-9d7c-44d1-a742-91d37beede92-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:22:11 crc kubenswrapper[5118]: I0121 00:22:11.222902 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c53b5d-9d7c-44d1-a742-91d37beede92-kube-api-access-ktrsr" (OuterVolumeSpecName: "kube-api-access-ktrsr") pod "47c53b5d-9d7c-44d1-a742-91d37beede92" (UID: "47c53b5d-9d7c-44d1-a742-91d37beede92"). InnerVolumeSpecName "kube-api-access-ktrsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:22:11 crc kubenswrapper[5118]: I0121 00:22:11.316928 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47c53b5d-9d7c-44d1-a742-91d37beede92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47c53b5d-9d7c-44d1-a742-91d37beede92" (UID: "47c53b5d-9d7c-44d1-a742-91d37beede92"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:22:11 crc kubenswrapper[5118]: I0121 00:22:11.317001 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ktrsr\" (UniqueName: \"kubernetes.io/projected/47c53b5d-9d7c-44d1-a742-91d37beede92-kube-api-access-ktrsr\") on node \"crc\" DevicePath \"\"" Jan 21 00:22:11 crc kubenswrapper[5118]: I0121 00:22:11.418722 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c53b5d-9d7c-44d1-a742-91d37beede92-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:22:12 crc kubenswrapper[5118]: I0121 00:22:12.102719 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nf6rb" Jan 21 00:22:12 crc kubenswrapper[5118]: I0121 00:22:12.142187 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nf6rb"] Jan 21 00:22:12 crc kubenswrapper[5118]: I0121 00:22:12.147042 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nf6rb"] Jan 21 00:22:12 crc kubenswrapper[5118]: I0121 00:22:12.985631 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c53b5d-9d7c-44d1-a742-91d37beede92" path="/var/lib/kubelet/pods/47c53b5d-9d7c-44d1-a742-91d37beede92/volumes" Jan 21 00:22:15 crc kubenswrapper[5118]: I0121 00:22:15.879759 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482582-t6ttq"] Jan 21 00:22:15 crc kubenswrapper[5118]: W0121 00:22:15.907880 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f097a29_16a1_4f56_873e_7bdd4ee1e659.slice/crio-29e5e372939872356179d64cea1d0b4036bba7e86ca62b7856b723d4833281bc WatchSource:0}: Error finding container 29e5e372939872356179d64cea1d0b4036bba7e86ca62b7856b723d4833281bc: Status 404 
returned error can't find the container with id 29e5e372939872356179d64cea1d0b4036bba7e86ca62b7856b723d4833281bc Jan 21 00:22:16 crc kubenswrapper[5118]: I0121 00:22:16.127954 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482582-t6ttq" event={"ID":"0f097a29-16a1-4f56-873e-7bdd4ee1e659","Type":"ContainerStarted","Data":"29e5e372939872356179d64cea1d0b4036bba7e86ca62b7856b723d4833281bc"} Jan 21 00:22:17 crc kubenswrapper[5118]: I0121 00:22:17.138098 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"3a83d7d0-ad82-4da4-8c10-269310b2e144","Type":"ContainerStarted","Data":"5dadd153e931ebc86656f3b72f5acf5417fe609e327c7d269f7f863022394ebf"} Jan 21 00:22:17 crc kubenswrapper[5118]: I0121 00:22:17.140401 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dgcx5" event={"ID":"edd1b292-13e4-40b3-8889-56a41343b200","Type":"ContainerStarted","Data":"701364214d0b50632515ad9e1aa7490ac066ec1dd1f51c5afef9600eb4326d3e"} Jan 21 00:22:17 crc kubenswrapper[5118]: I0121 00:22:17.346435 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dgcx5" podStartSLOduration=15.673905094 podStartE2EDuration="38.346417145s" podCreationTimestamp="2026-01-21 00:21:39 +0000 UTC" firstStartedPulling="2026-01-21 00:21:52.870966488 +0000 UTC m=+768.195213496" lastFinishedPulling="2026-01-21 00:22:15.543478529 +0000 UTC m=+790.867725547" observedRunningTime="2026-01-21 00:22:17.344782171 +0000 UTC m=+792.669029179" watchObservedRunningTime="2026-01-21 00:22:17.346417145 +0000 UTC m=+792.670664183" Jan 21 00:22:17 crc kubenswrapper[5118]: I0121 00:22:17.506728 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 00:22:17 crc kubenswrapper[5118]: I0121 
00:22:17.541176 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 00:22:18 crc kubenswrapper[5118]: I0121 00:22:18.147201 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482582-t6ttq" event={"ID":"0f097a29-16a1-4f56-873e-7bdd4ee1e659","Type":"ContainerStarted","Data":"a1a8ac72acddc229e2e3eee4429a00ce12772e2fb7c232afa4a91ad745b8570d"} Jan 21 00:22:18 crc kubenswrapper[5118]: I0121 00:22:18.165911 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29482582-t6ttq" podStartSLOduration=17.075108909 podStartE2EDuration="18.165890152s" podCreationTimestamp="2026-01-21 00:22:00 +0000 UTC" firstStartedPulling="2026-01-21 00:22:15.910924853 +0000 UTC m=+791.235171871" lastFinishedPulling="2026-01-21 00:22:17.001706096 +0000 UTC m=+792.325953114" observedRunningTime="2026-01-21 00:22:18.162702787 +0000 UTC m=+793.486949825" watchObservedRunningTime="2026-01-21 00:22:18.165890152 +0000 UTC m=+793.490137170" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.156195 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a83d7d0-ad82-4da4-8c10-269310b2e144" containerID="5dadd153e931ebc86656f3b72f5acf5417fe609e327c7d269f7f863022394ebf" exitCode=0 Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.156270 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"3a83d7d0-ad82-4da4-8c10-269310b2e144","Type":"ContainerDied","Data":"5dadd153e931ebc86656f3b72f5acf5417fe609e327c7d269f7f863022394ebf"} Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.161355 5118 generic.go:358] "Generic (PLEG): container finished" podID="0f097a29-16a1-4f56-873e-7bdd4ee1e659" containerID="a1a8ac72acddc229e2e3eee4429a00ce12772e2fb7c232afa4a91ad745b8570d" exitCode=0 Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.161446 5118 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482582-t6ttq" event={"ID":"0f097a29-16a1-4f56-873e-7bdd4ee1e659","Type":"ContainerDied","Data":"a1a8ac72acddc229e2e3eee4429a00ce12772e2fb7c232afa4a91ad745b8570d"} Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.351541 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg"] Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.352098 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="47c53b5d-9d7c-44d1-a742-91d37beede92" containerName="extract-utilities" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.352114 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c53b5d-9d7c-44d1-a742-91d37beede92" containerName="extract-utilities" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.352137 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="47c53b5d-9d7c-44d1-a742-91d37beede92" containerName="registry-server" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.352143 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c53b5d-9d7c-44d1-a742-91d37beede92" containerName="registry-server" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.352189 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="47c53b5d-9d7c-44d1-a742-91d37beede92" containerName="extract-content" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.352195 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c53b5d-9d7c-44d1-a742-91d37beede92" containerName="extract-content" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.352290 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="47c53b5d-9d7c-44d1-a742-91d37beede92" containerName="registry-server" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.355594 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.357121 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-d2vjh\"" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.357550 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.357720 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.359468 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg"] Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.380458 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-q9qmg\" (UID: \"a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.380549 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r8qq\" (UniqueName: \"kubernetes.io/projected/a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b-kube-api-access-5r8qq\") pod \"cert-manager-webhook-7894b5b9b4-q9qmg\" (UID: \"a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.481295 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b-bound-sa-token\") pod 
\"cert-manager-webhook-7894b5b9b4-q9qmg\" (UID: \"a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.481563 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5r8qq\" (UniqueName: \"kubernetes.io/projected/a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b-kube-api-access-5r8qq\") pod \"cert-manager-webhook-7894b5b9b4-q9qmg\" (UID: \"a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.500150 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-q9qmg\" (UID: \"a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.500231 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r8qq\" (UniqueName: \"kubernetes.io/projected/a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b-kube-api-access-5r8qq\") pod \"cert-manager-webhook-7894b5b9b4-q9qmg\" (UID: \"a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg" Jan 21 00:22:19 crc kubenswrapper[5118]: I0121 00:22:19.722122 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg" Jan 21 00:22:20 crc kubenswrapper[5118]: I0121 00:22:20.164332 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg"] Jan 21 00:22:20 crc kubenswrapper[5118]: I0121 00:22:20.170201 5118 generic.go:358] "Generic (PLEG): container finished" podID="3a83d7d0-ad82-4da4-8c10-269310b2e144" containerID="d2c67ad54cfb025ef33e6cc660ad666577434cbd026c90fea7e92fd7567e4018" exitCode=0 Jan 21 00:22:20 crc kubenswrapper[5118]: I0121 00:22:20.170484 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"3a83d7d0-ad82-4da4-8c10-269310b2e144","Type":"ContainerDied","Data":"d2c67ad54cfb025ef33e6cc660ad666577434cbd026c90fea7e92fd7567e4018"} Jan 21 00:22:20 crc kubenswrapper[5118]: W0121 00:22:20.173016 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1ed4705_4ddb_4ee6_bce2_c8b90c8a459b.slice/crio-b42d84cab59a0caa6760ad515f7f5264964d05ecfc114e26544b2b5abb5600ab WatchSource:0}: Error finding container b42d84cab59a0caa6760ad515f7f5264964d05ecfc114e26544b2b5abb5600ab: Status 404 returned error can't find the container with id b42d84cab59a0caa6760ad515f7f5264964d05ecfc114e26544b2b5abb5600ab Jan 21 00:22:20 crc kubenswrapper[5118]: I0121 00:22:20.426610 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482582-t6ttq" Jan 21 00:22:20 crc kubenswrapper[5118]: I0121 00:22:20.494549 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxq8j\" (UniqueName: \"kubernetes.io/projected/0f097a29-16a1-4f56-873e-7bdd4ee1e659-kube-api-access-jxq8j\") pod \"0f097a29-16a1-4f56-873e-7bdd4ee1e659\" (UID: \"0f097a29-16a1-4f56-873e-7bdd4ee1e659\") " Jan 21 00:22:20 crc kubenswrapper[5118]: I0121 00:22:20.504276 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f097a29-16a1-4f56-873e-7bdd4ee1e659-kube-api-access-jxq8j" (OuterVolumeSpecName: "kube-api-access-jxq8j") pod "0f097a29-16a1-4f56-873e-7bdd4ee1e659" (UID: "0f097a29-16a1-4f56-873e-7bdd4ee1e659"). InnerVolumeSpecName "kube-api-access-jxq8j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:22:20 crc kubenswrapper[5118]: I0121 00:22:20.596750 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jxq8j\" (UniqueName: \"kubernetes.io/projected/0f097a29-16a1-4f56-873e-7bdd4ee1e659-kube-api-access-jxq8j\") on node \"crc\" DevicePath \"\"" Jan 21 00:22:21 crc kubenswrapper[5118]: I0121 00:22:21.176369 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482582-t6ttq" Jan 21 00:22:21 crc kubenswrapper[5118]: I0121 00:22:21.176379 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482582-t6ttq" event={"ID":"0f097a29-16a1-4f56-873e-7bdd4ee1e659","Type":"ContainerDied","Data":"29e5e372939872356179d64cea1d0b4036bba7e86ca62b7856b723d4833281bc"} Jan 21 00:22:21 crc kubenswrapper[5118]: I0121 00:22:21.176409 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29e5e372939872356179d64cea1d0b4036bba7e86ca62b7856b723d4833281bc" Jan 21 00:22:21 crc kubenswrapper[5118]: I0121 00:22:21.180749 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"3a83d7d0-ad82-4da4-8c10-269310b2e144","Type":"ContainerStarted","Data":"83813d3624d576bd7929c8d61d2a23f11812e91641d59a7640805e83d439f500"} Jan 21 00:22:21 crc kubenswrapper[5118]: I0121 00:22:21.181076 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:22:21 crc kubenswrapper[5118]: I0121 00:22:21.182078 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg" event={"ID":"a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b","Type":"ContainerStarted","Data":"b42d84cab59a0caa6760ad515f7f5264964d05ecfc114e26544b2b5abb5600ab"} Jan 21 00:22:21 crc kubenswrapper[5118]: I0121 00:22:21.213761 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482576-pvc4n"] Jan 21 00:22:21 crc kubenswrapper[5118]: I0121 00:22:21.217357 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482576-pvc4n"] Jan 21 00:22:21 crc kubenswrapper[5118]: I0121 00:22:21.222008 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" 
podStartSLOduration=6.288898639 podStartE2EDuration="27.221991251s" podCreationTimestamp="2026-01-21 00:21:54 +0000 UTC" firstStartedPulling="2026-01-21 00:21:55.282545345 +0000 UTC m=+770.606792363" lastFinishedPulling="2026-01-21 00:22:16.215637957 +0000 UTC m=+791.539884975" observedRunningTime="2026-01-21 00:22:21.218316583 +0000 UTC m=+796.542563621" watchObservedRunningTime="2026-01-21 00:22:21.221991251 +0000 UTC m=+796.546238269" Jan 21 00:22:22 crc kubenswrapper[5118]: I0121 00:22:22.930259 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-7cfzv"] Jan 21 00:22:22 crc kubenswrapper[5118]: I0121 00:22:22.931146 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0f097a29-16a1-4f56-873e-7bdd4ee1e659" containerName="oc" Jan 21 00:22:22 crc kubenswrapper[5118]: I0121 00:22:22.931194 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f097a29-16a1-4f56-873e-7bdd4ee1e659" containerName="oc" Jan 21 00:22:22 crc kubenswrapper[5118]: I0121 00:22:22.931306 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="0f097a29-16a1-4f56-873e-7bdd4ee1e659" containerName="oc" Jan 21 00:22:23 crc kubenswrapper[5118]: I0121 00:22:23.109693 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-7cfzv"] Jan 21 00:22:23 crc kubenswrapper[5118]: I0121 00:22:23.113012 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-7cfzv" Jan 21 00:22:23 crc kubenswrapper[5118]: I0121 00:22:23.116255 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-2cxk5\"" Jan 21 00:22:23 crc kubenswrapper[5118]: I0121 00:22:23.119728 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce" path="/var/lib/kubelet/pods/2f1c0214-a0cf-41c8-b79e-c7d666c4d7ce/volumes" Jan 21 00:22:23 crc kubenswrapper[5118]: I0121 00:22:23.245573 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr4lw\" (UniqueName: \"kubernetes.io/projected/54dd7013-936e-44cb-92df-0e4ed02dd3ba-kube-api-access-gr4lw\") pod \"cert-manager-cainjector-7dbf76d5c8-7cfzv\" (UID: \"54dd7013-936e-44cb-92df-0e4ed02dd3ba\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-7cfzv" Jan 21 00:22:23 crc kubenswrapper[5118]: I0121 00:22:23.245726 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54dd7013-936e-44cb-92df-0e4ed02dd3ba-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-7cfzv\" (UID: \"54dd7013-936e-44cb-92df-0e4ed02dd3ba\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-7cfzv" Jan 21 00:22:23 crc kubenswrapper[5118]: I0121 00:22:23.347010 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gr4lw\" (UniqueName: \"kubernetes.io/projected/54dd7013-936e-44cb-92df-0e4ed02dd3ba-kube-api-access-gr4lw\") pod \"cert-manager-cainjector-7dbf76d5c8-7cfzv\" (UID: \"54dd7013-936e-44cb-92df-0e4ed02dd3ba\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-7cfzv" Jan 21 00:22:23 crc kubenswrapper[5118]: I0121 00:22:23.347103 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54dd7013-936e-44cb-92df-0e4ed02dd3ba-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-7cfzv\" (UID: \"54dd7013-936e-44cb-92df-0e4ed02dd3ba\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-7cfzv" Jan 21 00:22:23 crc kubenswrapper[5118]: I0121 00:22:23.367310 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54dd7013-936e-44cb-92df-0e4ed02dd3ba-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-7cfzv\" (UID: \"54dd7013-936e-44cb-92df-0e4ed02dd3ba\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-7cfzv" Jan 21 00:22:23 crc kubenswrapper[5118]: I0121 00:22:23.370387 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr4lw\" (UniqueName: \"kubernetes.io/projected/54dd7013-936e-44cb-92df-0e4ed02dd3ba-kube-api-access-gr4lw\") pod \"cert-manager-cainjector-7dbf76d5c8-7cfzv\" (UID: \"54dd7013-936e-44cb-92df-0e4ed02dd3ba\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-7cfzv" Jan 21 00:22:23 crc kubenswrapper[5118]: I0121 00:22:23.458222 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-7cfzv" Jan 21 00:22:24 crc kubenswrapper[5118]: I0121 00:22:24.121961 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-7cfzv"] Jan 21 00:22:24 crc kubenswrapper[5118]: W0121 00:22:24.132126 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54dd7013_936e_44cb_92df_0e4ed02dd3ba.slice/crio-f5ff4916496ac3dd103bf50913d392d259877df1d39d6918c588a6beb6d07015 WatchSource:0}: Error finding container f5ff4916496ac3dd103bf50913d392d259877df1d39d6918c588a6beb6d07015: Status 404 returned error can't find the container with id f5ff4916496ac3dd103bf50913d392d259877df1d39d6918c588a6beb6d07015 Jan 21 00:22:24 crc kubenswrapper[5118]: I0121 00:22:24.205407 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-7cfzv" event={"ID":"54dd7013-936e-44cb-92df-0e4ed02dd3ba","Type":"ContainerStarted","Data":"f5ff4916496ac3dd103bf50913d392d259877df1d39d6918c588a6beb6d07015"} Jan 21 00:22:31 crc kubenswrapper[5118]: I0121 00:22:31.886223 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Jan 21 00:22:31 crc kubenswrapper[5118]: I0121 00:22:31.891424 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Jan 21 00:22:31 crc kubenswrapper[5118]: I0121 00:22:31.894601 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-catalog-configmap-partition-1\"" Jan 21 00:22:31 crc kubenswrapper[5118]: I0121 00:22:31.898877 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Jan 21 00:22:31 crc kubenswrapper[5118]: I0121 00:22:31.979808 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/e352d8be-c25b-4892-b368-9816c38e151c-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"e352d8be-c25b-4892-b368-9816c38e151c\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Jan 21 00:22:31 crc kubenswrapper[5118]: I0121 00:22:31.980046 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/e352d8be-c25b-4892-b368-9816c38e151c-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"e352d8be-c25b-4892-b368-9816c38e151c\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Jan 21 00:22:31 crc kubenswrapper[5118]: I0121 00:22:31.980115 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6bjd\" (UniqueName: \"kubernetes.io/projected/e352d8be-c25b-4892-b368-9816c38e151c-kube-api-access-t6bjd\") pod 
\"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"e352d8be-c25b-4892-b368-9816c38e151c\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.081689 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/e352d8be-c25b-4892-b368-9816c38e151c-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"e352d8be-c25b-4892-b368-9816c38e151c\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.081758 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/e352d8be-c25b-4892-b368-9816c38e151c-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"e352d8be-c25b-4892-b368-9816c38e151c\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.081807 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t6bjd\" (UniqueName: \"kubernetes.io/projected/e352d8be-c25b-4892-b368-9816c38e151c-kube-api-access-t6bjd\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"e352d8be-c25b-4892-b368-9816c38e151c\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.083179 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" 
(UniqueName: \"kubernetes.io/empty-dir/e352d8be-c25b-4892-b368-9816c38e151c-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"e352d8be-c25b-4892-b368-9816c38e151c\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.083519 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/e352d8be-c25b-4892-b368-9816c38e151c-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"e352d8be-c25b-4892-b368-9816c38e151c\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.115447 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6bjd\" (UniqueName: \"kubernetes.io/projected/e352d8be-c25b-4892-b368-9816c38e151c-kube-api-access-t6bjd\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"e352d8be-c25b-4892-b368-9816c38e151c\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.262480 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.296321 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-7cfzv" event={"ID":"54dd7013-936e-44cb-92df-0e4ed02dd3ba","Type":"ContainerStarted","Data":"36d64d0715b1bef151b80d551bd2865c77c17a582db446c00649811b75244c71"} Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.309099 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg" event={"ID":"a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b","Type":"ContainerStarted","Data":"05fdd7874c1428d9c02ed12d6b71d12104293ecf786691f1b14fefe8e9932ddc"} Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.309694 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg" Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.327912 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-7cfzv" podStartSLOduration=2.563900252 podStartE2EDuration="10.327896406s" podCreationTimestamp="2026-01-21 00:22:22 +0000 UTC" firstStartedPulling="2026-01-21 00:22:24.137984723 +0000 UTC m=+799.462231741" lastFinishedPulling="2026-01-21 00:22:31.901980877 +0000 UTC m=+807.226227895" observedRunningTime="2026-01-21 00:22:32.326021506 +0000 UTC m=+807.650268544" watchObservedRunningTime="2026-01-21 00:22:32.327896406 +0000 UTC m=+807.652143424" Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.361404 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg" podStartSLOduration=1.61670805 podStartE2EDuration="13.361387837s" podCreationTimestamp="2026-01-21 00:22:19 +0000 UTC" firstStartedPulling="2026-01-21 00:22:20.175846334 +0000 UTC m=+795.500093352" 
lastFinishedPulling="2026-01-21 00:22:31.920526111 +0000 UTC m=+807.244773139" observedRunningTime="2026-01-21 00:22:32.360197885 +0000 UTC m=+807.684444903" watchObservedRunningTime="2026-01-21 00:22:32.361387837 +0000 UTC m=+807.685634855" Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.412409 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="3a83d7d0-ad82-4da4-8c10-269310b2e144" containerName="elasticsearch" probeResult="failure" output=< Jan 21 00:22:32 crc kubenswrapper[5118]: {"timestamp": "2026-01-21T00:22:32+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 21 00:22:32 crc kubenswrapper[5118]: > Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.752772 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Jan 21 00:22:32 crc kubenswrapper[5118]: W0121 00:22:32.759752 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode352d8be_c25b_4892_b368_9816c38e151c.slice/crio-16fb7d4d66c8e9e91cca26f60655b838276eac1f134809b7299a6943e9f0e587 WatchSource:0}: Error finding container 16fb7d4d66c8e9e91cca26f60655b838276eac1f134809b7299a6943e9f0e587: Status 404 returned error can't find the container with id 16fb7d4d66c8e9e91cca26f60655b838276eac1f134809b7299a6943e9f0e587 Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.829417 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-rhl9w"] Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.836345 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-rhl9w" Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.836428 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-rhl9w"] Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.842479 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-drc4q\"" Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.957545 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/afeab2f1-ad2a-4d1e-915e-6dbd338641e5-bound-sa-token\") pod \"cert-manager-858d87f86b-rhl9w\" (UID: \"afeab2f1-ad2a-4d1e-915e-6dbd338641e5\") " pod="cert-manager/cert-manager-858d87f86b-rhl9w" Jan 21 00:22:32 crc kubenswrapper[5118]: I0121 00:22:32.957622 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6zwv\" (UniqueName: \"kubernetes.io/projected/afeab2f1-ad2a-4d1e-915e-6dbd338641e5-kube-api-access-g6zwv\") pod \"cert-manager-858d87f86b-rhl9w\" (UID: \"afeab2f1-ad2a-4d1e-915e-6dbd338641e5\") " pod="cert-manager/cert-manager-858d87f86b-rhl9w" Jan 21 00:22:33 crc kubenswrapper[5118]: I0121 00:22:33.058873 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/afeab2f1-ad2a-4d1e-915e-6dbd338641e5-bound-sa-token\") pod \"cert-manager-858d87f86b-rhl9w\" (UID: \"afeab2f1-ad2a-4d1e-915e-6dbd338641e5\") " pod="cert-manager/cert-manager-858d87f86b-rhl9w" Jan 21 00:22:33 crc kubenswrapper[5118]: I0121 00:22:33.059054 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g6zwv\" (UniqueName: \"kubernetes.io/projected/afeab2f1-ad2a-4d1e-915e-6dbd338641e5-kube-api-access-g6zwv\") pod \"cert-manager-858d87f86b-rhl9w\" (UID: 
\"afeab2f1-ad2a-4d1e-915e-6dbd338641e5\") " pod="cert-manager/cert-manager-858d87f86b-rhl9w" Jan 21 00:22:33 crc kubenswrapper[5118]: I0121 00:22:33.077639 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/afeab2f1-ad2a-4d1e-915e-6dbd338641e5-bound-sa-token\") pod \"cert-manager-858d87f86b-rhl9w\" (UID: \"afeab2f1-ad2a-4d1e-915e-6dbd338641e5\") " pod="cert-manager/cert-manager-858d87f86b-rhl9w" Jan 21 00:22:33 crc kubenswrapper[5118]: I0121 00:22:33.093067 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6zwv\" (UniqueName: \"kubernetes.io/projected/afeab2f1-ad2a-4d1e-915e-6dbd338641e5-kube-api-access-g6zwv\") pod \"cert-manager-858d87f86b-rhl9w\" (UID: \"afeab2f1-ad2a-4d1e-915e-6dbd338641e5\") " pod="cert-manager/cert-manager-858d87f86b-rhl9w" Jan 21 00:22:33 crc kubenswrapper[5118]: I0121 00:22:33.157288 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-rhl9w" Jan 21 00:22:33 crc kubenswrapper[5118]: I0121 00:22:33.523917 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"e352d8be-c25b-4892-b368-9816c38e151c","Type":"ContainerStarted","Data":"16fb7d4d66c8e9e91cca26f60655b838276eac1f134809b7299a6943e9f0e587"} Jan 21 00:22:33 crc kubenswrapper[5118]: I0121 00:22:33.691376 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-rhl9w"] Jan 21 00:22:33 crc kubenswrapper[5118]: W0121 00:22:33.700352 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafeab2f1_ad2a_4d1e_915e_6dbd338641e5.slice/crio-8f0adf7295126d2cb336415380d4c0b2d15d2bef9d1449141faadce788db214e WatchSource:0}: Error finding container 
8f0adf7295126d2cb336415380d4c0b2d15d2bef9d1449141faadce788db214e: Status 404 returned error can't find the container with id 8f0adf7295126d2cb336415380d4c0b2d15d2bef9d1449141faadce788db214e Jan 21 00:22:34 crc kubenswrapper[5118]: I0121 00:22:34.530185 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-rhl9w" event={"ID":"afeab2f1-ad2a-4d1e-915e-6dbd338641e5","Type":"ContainerStarted","Data":"8f0adf7295126d2cb336415380d4c0b2d15d2bef9d1449141faadce788db214e"} Jan 21 00:22:35 crc kubenswrapper[5118]: I0121 00:22:35.538704 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-rhl9w" event={"ID":"afeab2f1-ad2a-4d1e-915e-6dbd338641e5","Type":"ContainerStarted","Data":"dd99f95fb17e75bc0df3b2aaa7e9c539d2a6a4298725094ed578dd6fbbca4f38"} Jan 21 00:22:35 crc kubenswrapper[5118]: I0121 00:22:35.557652 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-rhl9w" podStartSLOduration=3.557636774 podStartE2EDuration="3.557636774s" podCreationTimestamp="2026-01-21 00:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:22:35.552526058 +0000 UTC m=+810.876773086" watchObservedRunningTime="2026-01-21 00:22:35.557636774 +0000 UTC m=+810.881883782" Jan 21 00:22:37 crc kubenswrapper[5118]: I0121 00:22:37.325199 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="3a83d7d0-ad82-4da4-8c10-269310b2e144" containerName="elasticsearch" probeResult="failure" output=< Jan 21 00:22:37 crc kubenswrapper[5118]: {"timestamp": "2026-01-21T00:22:37+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 21 00:22:37 crc kubenswrapper[5118]: > Jan 21 00:22:39 crc kubenswrapper[5118]: I0121 00:22:39.534032 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="cert-manager/cert-manager-webhook-7894b5b9b4-q9qmg" Jan 21 00:22:42 crc kubenswrapper[5118]: I0121 00:22:42.262361 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="3a83d7d0-ad82-4da4-8c10-269310b2e144" containerName="elasticsearch" probeResult="failure" output=< Jan 21 00:22:42 crc kubenswrapper[5118]: {"timestamp": "2026-01-21T00:22:42+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 21 00:22:42 crc kubenswrapper[5118]: > Jan 21 00:22:47 crc kubenswrapper[5118]: I0121 00:22:47.370140 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="3a83d7d0-ad82-4da4-8c10-269310b2e144" containerName="elasticsearch" probeResult="failure" output=< Jan 21 00:22:47 crc kubenswrapper[5118]: {"timestamp": "2026-01-21T00:22:47+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 21 00:22:47 crc kubenswrapper[5118]: > Jan 21 00:22:52 crc kubenswrapper[5118]: I0121 00:22:52.250807 5118 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="3a83d7d0-ad82-4da4-8c10-269310b2e144" containerName="elasticsearch" probeResult="failure" output=< Jan 21 00:22:52 crc kubenswrapper[5118]: {"timestamp": "2026-01-21T00:22:52+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 21 00:22:52 crc kubenswrapper[5118]: > Jan 21 00:22:54 crc kubenswrapper[5118]: I0121 00:22:54.803139 5118 generic.go:358] "Generic (PLEG): container finished" podID="e352d8be-c25b-4892-b368-9816c38e151c" containerID="05d66f5faf696ed9108116614fa979644ef20d1c482fbfdd8e2fa99c8eca9240" exitCode=0 Jan 21 00:22:54 crc kubenswrapper[5118]: I0121 00:22:54.803244 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" 
event={"ID":"e352d8be-c25b-4892-b368-9816c38e151c","Type":"ContainerDied","Data":"05d66f5faf696ed9108116614fa979644ef20d1c482fbfdd8e2fa99c8eca9240"} Jan 21 00:22:57 crc kubenswrapper[5118]: I0121 00:22:57.932026 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 00:23:00 crc kubenswrapper[5118]: I0121 00:23:00.841247 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"e352d8be-c25b-4892-b368-9816c38e151c","Type":"ContainerStarted","Data":"2adf85125692a14f9a8f00217b293285f96211ad037d7db49b483677d08e6808"} Jan 21 00:23:00 crc kubenswrapper[5118]: I0121 00:23:00.868088 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" podStartSLOduration=2.797450675 podStartE2EDuration="29.868067264s" podCreationTimestamp="2026-01-21 00:22:31 +0000 UTC" firstStartedPulling="2026-01-21 00:22:32.761931361 +0000 UTC m=+808.086178369" lastFinishedPulling="2026-01-21 00:22:59.83254794 +0000 UTC m=+835.156794958" observedRunningTime="2026-01-21 00:23:00.867027827 +0000 UTC m=+836.191274875" watchObservedRunningTime="2026-01-21 00:23:00.868067264 +0000 UTC m=+836.192314292" Jan 21 00:23:01 crc kubenswrapper[5118]: I0121 00:23:01.558298 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v"] Jan 21 00:23:01 crc kubenswrapper[5118]: I0121 00:23:01.565088 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" Jan 21 00:23:01 crc kubenswrapper[5118]: I0121 00:23:01.568506 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v"] Jan 21 00:23:01 crc kubenswrapper[5118]: I0121 00:23:01.635625 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/856b1a14-e4ae-4518-a553-056f5d736bc8-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v\" (UID: \"856b1a14-e4ae-4518-a553-056f5d736bc8\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" Jan 21 00:23:01 crc kubenswrapper[5118]: I0121 00:23:01.635699 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bfzs\" (UniqueName: \"kubernetes.io/projected/856b1a14-e4ae-4518-a553-056f5d736bc8-kube-api-access-5bfzs\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v\" (UID: \"856b1a14-e4ae-4518-a553-056f5d736bc8\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" Jan 21 00:23:01 crc kubenswrapper[5118]: I0121 00:23:01.635729 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/856b1a14-e4ae-4518-a553-056f5d736bc8-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v\" (UID: \"856b1a14-e4ae-4518-a553-056f5d736bc8\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" Jan 21 00:23:01 crc kubenswrapper[5118]: I0121 00:23:01.737051 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/856b1a14-e4ae-4518-a553-056f5d736bc8-bundle\") 
pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v\" (UID: \"856b1a14-e4ae-4518-a553-056f5d736bc8\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" Jan 21 00:23:01 crc kubenswrapper[5118]: I0121 00:23:01.737287 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/856b1a14-e4ae-4518-a553-056f5d736bc8-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v\" (UID: \"856b1a14-e4ae-4518-a553-056f5d736bc8\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" Jan 21 00:23:01 crc kubenswrapper[5118]: I0121 00:23:01.737357 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5bfzs\" (UniqueName: \"kubernetes.io/projected/856b1a14-e4ae-4518-a553-056f5d736bc8-kube-api-access-5bfzs\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v\" (UID: \"856b1a14-e4ae-4518-a553-056f5d736bc8\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" Jan 21 00:23:01 crc kubenswrapper[5118]: I0121 00:23:01.737618 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/856b1a14-e4ae-4518-a553-056f5d736bc8-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v\" (UID: \"856b1a14-e4ae-4518-a553-056f5d736bc8\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" Jan 21 00:23:01 crc kubenswrapper[5118]: I0121 00:23:01.737782 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/856b1a14-e4ae-4518-a553-056f5d736bc8-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v\" (UID: \"856b1a14-e4ae-4518-a553-056f5d736bc8\") " 
pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" Jan 21 00:23:01 crc kubenswrapper[5118]: I0121 00:23:01.759769 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bfzs\" (UniqueName: \"kubernetes.io/projected/856b1a14-e4ae-4518-a553-056f5d736bc8-kube-api-access-5bfzs\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v\" (UID: \"856b1a14-e4ae-4518-a553-056f5d736bc8\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" Jan 21 00:23:01 crc kubenswrapper[5118]: I0121 00:23:01.882832 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" Jan 21 00:23:02 crc kubenswrapper[5118]: W0121 00:23:02.081322 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod856b1a14_e4ae_4518_a553_056f5d736bc8.slice/crio-9cb338bec683f715c828efa76a23e27296728cd34a08a00fb9cab21a1cf57d2e WatchSource:0}: Error finding container 9cb338bec683f715c828efa76a23e27296728cd34a08a00fb9cab21a1cf57d2e: Status 404 returned error can't find the container with id 9cb338bec683f715c828efa76a23e27296728cd34a08a00fb9cab21a1cf57d2e Jan 21 00:23:02 crc kubenswrapper[5118]: I0121 00:23:02.083358 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v"] Jan 21 00:23:02 crc kubenswrapper[5118]: I0121 00:23:02.858521 5118 generic.go:358] "Generic (PLEG): container finished" podID="856b1a14-e4ae-4518-a553-056f5d736bc8" containerID="c68c4c65e29f3ead22beb64e6a2dcf8da295eeddb189f3da4f844a0e575ec5c3" exitCode=0 Jan 21 00:23:02 crc kubenswrapper[5118]: I0121 00:23:02.858959 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" 
event={"ID":"856b1a14-e4ae-4518-a553-056f5d736bc8","Type":"ContainerDied","Data":"c68c4c65e29f3ead22beb64e6a2dcf8da295eeddb189f3da4f844a0e575ec5c3"} Jan 21 00:23:02 crc kubenswrapper[5118]: I0121 00:23:02.858997 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" event={"ID":"856b1a14-e4ae-4518-a553-056f5d736bc8","Type":"ContainerStarted","Data":"9cb338bec683f715c828efa76a23e27296728cd34a08a00fb9cab21a1cf57d2e"} Jan 21 00:23:05 crc kubenswrapper[5118]: I0121 00:23:05.884922 5118 generic.go:358] "Generic (PLEG): container finished" podID="856b1a14-e4ae-4518-a553-056f5d736bc8" containerID="3c4cb6251fdc97803b7d11a94de8304f943f195097301c34343d123e9dc83d65" exitCode=0 Jan 21 00:23:05 crc kubenswrapper[5118]: I0121 00:23:05.884965 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" event={"ID":"856b1a14-e4ae-4518-a553-056f5d736bc8","Type":"ContainerDied","Data":"3c4cb6251fdc97803b7d11a94de8304f943f195097301c34343d123e9dc83d65"} Jan 21 00:23:06 crc kubenswrapper[5118]: I0121 00:23:06.019625 5118 scope.go:117] "RemoveContainer" containerID="80d637fd8b1b3dd2344197a9b35b41fe213fdc203deef8260a1114dc44892e7b" Jan 21 00:23:06 crc kubenswrapper[5118]: I0121 00:23:06.900072 5118 generic.go:358] "Generic (PLEG): container finished" podID="856b1a14-e4ae-4518-a553-056f5d736bc8" containerID="f9d424ced6853e48aebddc0a76b63f727529a167f4bd6cfe171e82c710c1eb4d" exitCode=0 Jan 21 00:23:06 crc kubenswrapper[5118]: I0121 00:23:06.900274 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" event={"ID":"856b1a14-e4ae-4518-a553-056f5d736bc8","Type":"ContainerDied","Data":"f9d424ced6853e48aebddc0a76b63f727529a167f4bd6cfe171e82c710c1eb4d"} Jan 21 00:23:08 crc kubenswrapper[5118]: I0121 00:23:08.194703 5118 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" Jan 21 00:23:08 crc kubenswrapper[5118]: I0121 00:23:08.343608 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/856b1a14-e4ae-4518-a553-056f5d736bc8-bundle\") pod \"856b1a14-e4ae-4518-a553-056f5d736bc8\" (UID: \"856b1a14-e4ae-4518-a553-056f5d736bc8\") " Jan 21 00:23:08 crc kubenswrapper[5118]: I0121 00:23:08.343651 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bfzs\" (UniqueName: \"kubernetes.io/projected/856b1a14-e4ae-4518-a553-056f5d736bc8-kube-api-access-5bfzs\") pod \"856b1a14-e4ae-4518-a553-056f5d736bc8\" (UID: \"856b1a14-e4ae-4518-a553-056f5d736bc8\") " Jan 21 00:23:08 crc kubenswrapper[5118]: I0121 00:23:08.343671 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/856b1a14-e4ae-4518-a553-056f5d736bc8-util\") pod \"856b1a14-e4ae-4518-a553-056f5d736bc8\" (UID: \"856b1a14-e4ae-4518-a553-056f5d736bc8\") " Jan 21 00:23:08 crc kubenswrapper[5118]: I0121 00:23:08.344884 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/856b1a14-e4ae-4518-a553-056f5d736bc8-bundle" (OuterVolumeSpecName: "bundle") pod "856b1a14-e4ae-4518-a553-056f5d736bc8" (UID: "856b1a14-e4ae-4518-a553-056f5d736bc8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:23:08 crc kubenswrapper[5118]: I0121 00:23:08.352333 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/856b1a14-e4ae-4518-a553-056f5d736bc8-util" (OuterVolumeSpecName: "util") pod "856b1a14-e4ae-4518-a553-056f5d736bc8" (UID: "856b1a14-e4ae-4518-a553-056f5d736bc8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:23:08 crc kubenswrapper[5118]: I0121 00:23:08.352511 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856b1a14-e4ae-4518-a553-056f5d736bc8-kube-api-access-5bfzs" (OuterVolumeSpecName: "kube-api-access-5bfzs") pod "856b1a14-e4ae-4518-a553-056f5d736bc8" (UID: "856b1a14-e4ae-4518-a553-056f5d736bc8"). InnerVolumeSpecName "kube-api-access-5bfzs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:23:08 crc kubenswrapper[5118]: I0121 00:23:08.445411 5118 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/856b1a14-e4ae-4518-a553-056f5d736bc8-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 00:23:08 crc kubenswrapper[5118]: I0121 00:23:08.445469 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5bfzs\" (UniqueName: \"kubernetes.io/projected/856b1a14-e4ae-4518-a553-056f5d736bc8-kube-api-access-5bfzs\") on node \"crc\" DevicePath \"\"" Jan 21 00:23:08 crc kubenswrapper[5118]: I0121 00:23:08.445500 5118 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/856b1a14-e4ae-4518-a553-056f5d736bc8-util\") on node \"crc\" DevicePath \"\"" Jan 21 00:23:08 crc kubenswrapper[5118]: I0121 00:23:08.918756 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" event={"ID":"856b1a14-e4ae-4518-a553-056f5d736bc8","Type":"ContainerDied","Data":"9cb338bec683f715c828efa76a23e27296728cd34a08a00fb9cab21a1cf57d2e"} Jan 21 00:23:08 crc kubenswrapper[5118]: I0121 00:23:08.918805 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cb338bec683f715c828efa76a23e27296728cd34a08a00fb9cab21a1cf57d2e" Jan 21 00:23:08 crc kubenswrapper[5118]: I0121 00:23:08.918830 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.455052 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-nzxxg"] Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.456359 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="856b1a14-e4ae-4518-a553-056f5d736bc8" containerName="extract" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.456377 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="856b1a14-e4ae-4518-a553-056f5d736bc8" containerName="extract" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.456409 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="856b1a14-e4ae-4518-a553-056f5d736bc8" containerName="util" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.456419 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="856b1a14-e4ae-4518-a553-056f5d736bc8" containerName="util" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.456429 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="856b1a14-e4ae-4518-a553-056f5d736bc8" containerName="pull" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.456439 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="856b1a14-e4ae-4518-a553-056f5d736bc8" containerName="pull" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.456609 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="856b1a14-e4ae-4518-a553-056f5d736bc8" containerName="extract" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.509248 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-nzxxg" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.512782 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-x6bzd\"" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.514531 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-nzxxg"] Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.609922 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/83839c7f-0d0d-41f1-83bf-77a677ceb327-runner\") pod \"smart-gateway-operator-97b85656c-nzxxg\" (UID: \"83839c7f-0d0d-41f1-83bf-77a677ceb327\") " pod="service-telemetry/smart-gateway-operator-97b85656c-nzxxg" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.610332 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b7m5\" (UniqueName: \"kubernetes.io/projected/83839c7f-0d0d-41f1-83bf-77a677ceb327-kube-api-access-4b7m5\") pod \"smart-gateway-operator-97b85656c-nzxxg\" (UID: \"83839c7f-0d0d-41f1-83bf-77a677ceb327\") " pod="service-telemetry/smart-gateway-operator-97b85656c-nzxxg" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.710764 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4b7m5\" (UniqueName: \"kubernetes.io/projected/83839c7f-0d0d-41f1-83bf-77a677ceb327-kube-api-access-4b7m5\") pod \"smart-gateway-operator-97b85656c-nzxxg\" (UID: \"83839c7f-0d0d-41f1-83bf-77a677ceb327\") " pod="service-telemetry/smart-gateway-operator-97b85656c-nzxxg" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.710832 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: 
\"kubernetes.io/empty-dir/83839c7f-0d0d-41f1-83bf-77a677ceb327-runner\") pod \"smart-gateway-operator-97b85656c-nzxxg\" (UID: \"83839c7f-0d0d-41f1-83bf-77a677ceb327\") " pod="service-telemetry/smart-gateway-operator-97b85656c-nzxxg" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.711364 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/83839c7f-0d0d-41f1-83bf-77a677ceb327-runner\") pod \"smart-gateway-operator-97b85656c-nzxxg\" (UID: \"83839c7f-0d0d-41f1-83bf-77a677ceb327\") " pod="service-telemetry/smart-gateway-operator-97b85656c-nzxxg" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.733930 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b7m5\" (UniqueName: \"kubernetes.io/projected/83839c7f-0d0d-41f1-83bf-77a677ceb327-kube-api-access-4b7m5\") pod \"smart-gateway-operator-97b85656c-nzxxg\" (UID: \"83839c7f-0d0d-41f1-83bf-77a677ceb327\") " pod="service-telemetry/smart-gateway-operator-97b85656c-nzxxg" Jan 21 00:23:12 crc kubenswrapper[5118]: I0121 00:23:12.828975 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-nzxxg" Jan 21 00:23:13 crc kubenswrapper[5118]: I0121 00:23:13.085877 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-nzxxg"] Jan 21 00:23:13 crc kubenswrapper[5118]: W0121 00:23:13.093040 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83839c7f_0d0d_41f1_83bf_77a677ceb327.slice/crio-b6f0cf214cb121fc949b8ca3cd383979e39ab9be0d547c718be2632d20659e44 WatchSource:0}: Error finding container b6f0cf214cb121fc949b8ca3cd383979e39ab9be0d547c718be2632d20659e44: Status 404 returned error can't find the container with id b6f0cf214cb121fc949b8ca3cd383979e39ab9be0d547c718be2632d20659e44 Jan 21 00:23:13 crc kubenswrapper[5118]: I0121 00:23:13.962548 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-nzxxg" event={"ID":"83839c7f-0d0d-41f1-83bf-77a677ceb327","Type":"ContainerStarted","Data":"b6f0cf214cb121fc949b8ca3cd383979e39ab9be0d547c718be2632d20659e44"} Jan 21 00:23:28 crc kubenswrapper[5118]: I0121 00:23:28.075746 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-nzxxg" event={"ID":"83839c7f-0d0d-41f1-83bf-77a677ceb327","Type":"ContainerStarted","Data":"f547260f6aa97d568bc4629ad6978f3269e255d67deb70680903d3f24d06cdeb"} Jan 21 00:23:28 crc kubenswrapper[5118]: I0121 00:23:28.097447 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-97b85656c-nzxxg" podStartSLOduration=1.868396799 podStartE2EDuration="16.097429536s" podCreationTimestamp="2026-01-21 00:23:12 +0000 UTC" firstStartedPulling="2026-01-21 00:23:13.096607481 +0000 UTC m=+848.420854499" lastFinishedPulling="2026-01-21 00:23:27.325640218 +0000 UTC m=+862.649887236" observedRunningTime="2026-01-21 
00:23:28.089966908 +0000 UTC m=+863.414213936" watchObservedRunningTime="2026-01-21 00:23:28.097429536 +0000 UTC m=+863.421676554" Jan 21 00:23:33 crc kubenswrapper[5118]: I0121 00:23:33.800747 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:23:33 crc kubenswrapper[5118]: I0121 00:23:33.801219 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:23:50 crc kubenswrapper[5118]: I0121 00:23:50.318482 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Jan 21 00:23:50 crc kubenswrapper[5118]: I0121 00:23:50.345128 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Jan 21 00:23:50 crc kubenswrapper[5118]: I0121 00:23:50.345349 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Jan 21 00:23:50 crc kubenswrapper[5118]: I0121 00:23:50.347489 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-catalog-configmap-partition-1\"" Jan 21 00:23:50 crc kubenswrapper[5118]: I0121 00:23:50.461160 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/e5695a91-ba6f-481a-8978-9b2cf485424e-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"e5695a91-ba6f-481a-8978-9b2cf485424e\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Jan 21 00:23:50 crc kubenswrapper[5118]: I0121 00:23:50.461408 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/e5695a91-ba6f-481a-8978-9b2cf485424e-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"e5695a91-ba6f-481a-8978-9b2cf485424e\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Jan 21 00:23:50 crc kubenswrapper[5118]: I0121 00:23:50.461523 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqks\" (UniqueName: \"kubernetes.io/projected/e5695a91-ba6f-481a-8978-9b2cf485424e-kube-api-access-nqqks\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"e5695a91-ba6f-481a-8978-9b2cf485424e\") " 
pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Jan 21 00:23:50 crc kubenswrapper[5118]: I0121 00:23:50.562894 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/e5695a91-ba6f-481a-8978-9b2cf485424e-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"e5695a91-ba6f-481a-8978-9b2cf485424e\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Jan 21 00:23:50 crc kubenswrapper[5118]: I0121 00:23:50.562959 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nqqks\" (UniqueName: \"kubernetes.io/projected/e5695a91-ba6f-481a-8978-9b2cf485424e-kube-api-access-nqqks\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"e5695a91-ba6f-481a-8978-9b2cf485424e\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Jan 21 00:23:50 crc kubenswrapper[5118]: I0121 00:23:50.563027 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/e5695a91-ba6f-481a-8978-9b2cf485424e-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"e5695a91-ba6f-481a-8978-9b2cf485424e\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Jan 21 00:23:50 crc kubenswrapper[5118]: I0121 00:23:50.563473 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: 
\"kubernetes.io/empty-dir/e5695a91-ba6f-481a-8978-9b2cf485424e-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"e5695a91-ba6f-481a-8978-9b2cf485424e\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Jan 21 00:23:50 crc kubenswrapper[5118]: I0121 00:23:50.564000 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/e5695a91-ba6f-481a-8978-9b2cf485424e-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"e5695a91-ba6f-481a-8978-9b2cf485424e\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Jan 21 00:23:50 crc kubenswrapper[5118]: I0121 00:23:50.586453 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqqks\" (UniqueName: \"kubernetes.io/projected/e5695a91-ba6f-481a-8978-9b2cf485424e-kube-api-access-nqqks\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"e5695a91-ba6f-481a-8978-9b2cf485424e\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Jan 21 00:23:50 crc kubenswrapper[5118]: I0121 00:23:50.664392 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Jan 21 00:23:51 crc kubenswrapper[5118]: I0121 00:23:51.071019 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Jan 21 00:23:51 crc kubenswrapper[5118]: I0121 00:23:51.259292 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"e5695a91-ba6f-481a-8978-9b2cf485424e","Type":"ContainerStarted","Data":"4ffebd369364f8ed886eb8fe59f67a1bca5269bf42f47932e68dffacb15fb286"} Jan 21 00:23:52 crc kubenswrapper[5118]: I0121 00:23:52.266769 5118 generic.go:358] "Generic (PLEG): container finished" podID="e5695a91-ba6f-481a-8978-9b2cf485424e" containerID="cdf033ad3c88599b3b938771c0095b8c7a8da7de2fcd835f4d95e4b9b57f07f4" exitCode=0 Jan 21 00:23:52 crc kubenswrapper[5118]: I0121 00:23:52.266832 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"e5695a91-ba6f-481a-8978-9b2cf485424e","Type":"ContainerDied","Data":"cdf033ad3c88599b3b938771c0095b8c7a8da7de2fcd835f4d95e4b9b57f07f4"} Jan 21 00:23:53 crc kubenswrapper[5118]: I0121 00:23:53.905382 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h8lxs"] Jan 21 00:23:54 crc kubenswrapper[5118]: I0121 00:23:54.305545 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h8lxs"] Jan 21 00:23:54 crc kubenswrapper[5118]: I0121 00:23:54.305735 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h8lxs" Jan 21 00:23:54 crc kubenswrapper[5118]: I0121 00:23:54.413764 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-catalog-content\") pod \"community-operators-h8lxs\" (UID: \"9c01e4c5-0de7-4437-b8cc-acdba1d858d6\") " pod="openshift-marketplace/community-operators-h8lxs" Jan 21 00:23:54 crc kubenswrapper[5118]: I0121 00:23:54.413812 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-utilities\") pod \"community-operators-h8lxs\" (UID: \"9c01e4c5-0de7-4437-b8cc-acdba1d858d6\") " pod="openshift-marketplace/community-operators-h8lxs" Jan 21 00:23:54 crc kubenswrapper[5118]: I0121 00:23:54.413851 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmgdl\" (UniqueName: \"kubernetes.io/projected/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-kube-api-access-gmgdl\") pod \"community-operators-h8lxs\" (UID: \"9c01e4c5-0de7-4437-b8cc-acdba1d858d6\") " pod="openshift-marketplace/community-operators-h8lxs" Jan 21 00:23:54 crc kubenswrapper[5118]: I0121 00:23:54.515574 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-utilities\") pod \"community-operators-h8lxs\" (UID: \"9c01e4c5-0de7-4437-b8cc-acdba1d858d6\") " pod="openshift-marketplace/community-operators-h8lxs" Jan 21 00:23:54 crc kubenswrapper[5118]: I0121 00:23:54.515653 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gmgdl\" (UniqueName: \"kubernetes.io/projected/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-kube-api-access-gmgdl\") pod 
\"community-operators-h8lxs\" (UID: \"9c01e4c5-0de7-4437-b8cc-acdba1d858d6\") " pod="openshift-marketplace/community-operators-h8lxs" Jan 21 00:23:54 crc kubenswrapper[5118]: I0121 00:23:54.515777 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-catalog-content\") pod \"community-operators-h8lxs\" (UID: \"9c01e4c5-0de7-4437-b8cc-acdba1d858d6\") " pod="openshift-marketplace/community-operators-h8lxs" Jan 21 00:23:54 crc kubenswrapper[5118]: I0121 00:23:54.516353 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-catalog-content\") pod \"community-operators-h8lxs\" (UID: \"9c01e4c5-0de7-4437-b8cc-acdba1d858d6\") " pod="openshift-marketplace/community-operators-h8lxs" Jan 21 00:23:54 crc kubenswrapper[5118]: I0121 00:23:54.516394 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-utilities\") pod \"community-operators-h8lxs\" (UID: \"9c01e4c5-0de7-4437-b8cc-acdba1d858d6\") " pod="openshift-marketplace/community-operators-h8lxs" Jan 21 00:23:54 crc kubenswrapper[5118]: I0121 00:23:54.537279 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmgdl\" (UniqueName: \"kubernetes.io/projected/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-kube-api-access-gmgdl\") pod \"community-operators-h8lxs\" (UID: \"9c01e4c5-0de7-4437-b8cc-acdba1d858d6\") " pod="openshift-marketplace/community-operators-h8lxs" Jan 21 00:23:54 crc kubenswrapper[5118]: I0121 00:23:54.620960 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h8lxs" Jan 21 00:23:54 crc kubenswrapper[5118]: I0121 00:23:54.822841 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h8lxs"] Jan 21 00:23:54 crc kubenswrapper[5118]: W0121 00:23:54.827715 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c01e4c5_0de7_4437_b8cc_acdba1d858d6.slice/crio-d3a84bc206185e6f2bb78b04c5540932066cfbd7c14c7994f8910c613565718b WatchSource:0}: Error finding container d3a84bc206185e6f2bb78b04c5540932066cfbd7c14c7994f8910c613565718b: Status 404 returned error can't find the container with id d3a84bc206185e6f2bb78b04c5540932066cfbd7c14c7994f8910c613565718b Jan 21 00:23:55 crc kubenswrapper[5118]: I0121 00:23:55.286188 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"e5695a91-ba6f-481a-8978-9b2cf485424e","Type":"ContainerStarted","Data":"44d9f2635c142227f95df150f6a20e05922f896e68b552465956e5f3d0906777"} Jan 21 00:23:55 crc kubenswrapper[5118]: I0121 00:23:55.289150 5118 generic.go:358] "Generic (PLEG): container finished" podID="9c01e4c5-0de7-4437-b8cc-acdba1d858d6" containerID="f937d37174d96c23b08548338989a181e5db74f0788f77d9e73a70d9c2f1e013" exitCode=0 Jan 21 00:23:55 crc kubenswrapper[5118]: I0121 00:23:55.289192 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8lxs" event={"ID":"9c01e4c5-0de7-4437-b8cc-acdba1d858d6","Type":"ContainerDied","Data":"f937d37174d96c23b08548338989a181e5db74f0788f77d9e73a70d9c2f1e013"} Jan 21 00:23:55 crc kubenswrapper[5118]: I0121 00:23:55.289324 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8lxs" 
event={"ID":"9c01e4c5-0de7-4437-b8cc-acdba1d858d6","Type":"ContainerStarted","Data":"d3a84bc206185e6f2bb78b04c5540932066cfbd7c14c7994f8910c613565718b"} Jan 21 00:23:55 crc kubenswrapper[5118]: I0121 00:23:55.302752 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" podStartSLOduration=2.85330413 podStartE2EDuration="5.302736192s" podCreationTimestamp="2026-01-21 00:23:50 +0000 UTC" firstStartedPulling="2026-01-21 00:23:52.267704304 +0000 UTC m=+887.591951322" lastFinishedPulling="2026-01-21 00:23:54.717136366 +0000 UTC m=+890.041383384" observedRunningTime="2026-01-21 00:23:55.300708398 +0000 UTC m=+890.624955436" watchObservedRunningTime="2026-01-21 00:23:55.302736192 +0000 UTC m=+890.626983220" Jan 21 00:23:56 crc kubenswrapper[5118]: I0121 00:23:56.744147 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd"] Jan 21 00:23:56 crc kubenswrapper[5118]: I0121 00:23:56.751371 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" Jan 21 00:23:56 crc kubenswrapper[5118]: I0121 00:23:56.754331 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 21 00:23:56 crc kubenswrapper[5118]: I0121 00:23:56.756656 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd"] Jan 21 00:23:56 crc kubenswrapper[5118]: I0121 00:23:56.847089 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/259b3b29-d29f-46b7-8808-75e572aadf9f-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd\" (UID: \"259b3b29-d29f-46b7-8808-75e572aadf9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" Jan 21 00:23:56 crc kubenswrapper[5118]: I0121 00:23:56.847410 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/259b3b29-d29f-46b7-8808-75e572aadf9f-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd\" (UID: \"259b3b29-d29f-46b7-8808-75e572aadf9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" Jan 21 00:23:56 crc kubenswrapper[5118]: I0121 00:23:56.847561 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvbj8\" (UniqueName: \"kubernetes.io/projected/259b3b29-d29f-46b7-8808-75e572aadf9f-kube-api-access-vvbj8\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd\" (UID: \"259b3b29-d29f-46b7-8808-75e572aadf9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" Jan 21 00:23:56 crc 
kubenswrapper[5118]: I0121 00:23:56.948722 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/259b3b29-d29f-46b7-8808-75e572aadf9f-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd\" (UID: \"259b3b29-d29f-46b7-8808-75e572aadf9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" Jan 21 00:23:56 crc kubenswrapper[5118]: I0121 00:23:56.948833 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vvbj8\" (UniqueName: \"kubernetes.io/projected/259b3b29-d29f-46b7-8808-75e572aadf9f-kube-api-access-vvbj8\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd\" (UID: \"259b3b29-d29f-46b7-8808-75e572aadf9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" Jan 21 00:23:56 crc kubenswrapper[5118]: I0121 00:23:56.948870 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/259b3b29-d29f-46b7-8808-75e572aadf9f-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd\" (UID: \"259b3b29-d29f-46b7-8808-75e572aadf9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" Jan 21 00:23:56 crc kubenswrapper[5118]: I0121 00:23:56.949277 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/259b3b29-d29f-46b7-8808-75e572aadf9f-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd\" (UID: \"259b3b29-d29f-46b7-8808-75e572aadf9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" Jan 21 00:23:56 crc kubenswrapper[5118]: I0121 00:23:56.949417 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/259b3b29-d29f-46b7-8808-75e572aadf9f-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd\" (UID: \"259b3b29-d29f-46b7-8808-75e572aadf9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" Jan 21 00:23:56 crc kubenswrapper[5118]: I0121 00:23:56.986149 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvbj8\" (UniqueName: \"kubernetes.io/projected/259b3b29-d29f-46b7-8808-75e572aadf9f-kube-api-access-vvbj8\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd\" (UID: \"259b3b29-d29f-46b7-8808-75e572aadf9f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" Jan 21 00:23:57 crc kubenswrapper[5118]: I0121 00:23:57.126440 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" Jan 21 00:23:57 crc kubenswrapper[5118]: I0121 00:23:57.306718 5118 generic.go:358] "Generic (PLEG): container finished" podID="9c01e4c5-0de7-4437-b8cc-acdba1d858d6" containerID="a072137842a334349bb59648aaa3e10d82a713c0b81eed581e91b6dd54817dc6" exitCode=0 Jan 21 00:23:57 crc kubenswrapper[5118]: I0121 00:23:57.306766 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8lxs" event={"ID":"9c01e4c5-0de7-4437-b8cc-acdba1d858d6","Type":"ContainerDied","Data":"a072137842a334349bb59648aaa3e10d82a713c0b81eed581e91b6dd54817dc6"} Jan 21 00:23:57 crc kubenswrapper[5118]: I0121 00:23:57.346552 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd"] Jan 21 00:23:57 crc kubenswrapper[5118]: I0121 00:23:57.754902 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p"] Jan 21 
00:23:58 crc kubenswrapper[5118]: I0121 00:23:58.124639 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p"] Jan 21 00:23:58 crc kubenswrapper[5118]: I0121 00:23:58.124906 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" Jan 21 00:23:58 crc kubenswrapper[5118]: I0121 00:23:58.282383 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d9a016c-6c95-45d3-83f9-297e6294957b-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p\" (UID: \"9d9a016c-6c95-45d3-83f9-297e6294957b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" Jan 21 00:23:58 crc kubenswrapper[5118]: I0121 00:23:58.282477 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22hmc\" (UniqueName: \"kubernetes.io/projected/9d9a016c-6c95-45d3-83f9-297e6294957b-kube-api-access-22hmc\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p\" (UID: \"9d9a016c-6c95-45d3-83f9-297e6294957b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" Jan 21 00:23:58 crc kubenswrapper[5118]: I0121 00:23:58.282505 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d9a016c-6c95-45d3-83f9-297e6294957b-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p\" (UID: \"9d9a016c-6c95-45d3-83f9-297e6294957b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" Jan 21 00:23:58 crc kubenswrapper[5118]: I0121 00:23:58.314479 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" event={"ID":"259b3b29-d29f-46b7-8808-75e572aadf9f","Type":"ContainerStarted","Data":"498c815792ec29acb9f8105523f20f9f474ca8a9c6008110a2a45d419daff7c0"} Jan 21 00:23:58 crc kubenswrapper[5118]: I0121 00:23:58.317107 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8lxs" event={"ID":"9c01e4c5-0de7-4437-b8cc-acdba1d858d6","Type":"ContainerStarted","Data":"54da7415c823806964d9b8360af3e2d164d9cb93d7573b64870cae0eb5032379"} Jan 21 00:23:58 crc kubenswrapper[5118]: I0121 00:23:58.383752 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d9a016c-6c95-45d3-83f9-297e6294957b-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p\" (UID: \"9d9a016c-6c95-45d3-83f9-297e6294957b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" Jan 21 00:23:58 crc kubenswrapper[5118]: I0121 00:23:58.383838 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-22hmc\" (UniqueName: \"kubernetes.io/projected/9d9a016c-6c95-45d3-83f9-297e6294957b-kube-api-access-22hmc\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p\" (UID: \"9d9a016c-6c95-45d3-83f9-297e6294957b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" Jan 21 00:23:58 crc kubenswrapper[5118]: I0121 00:23:58.383881 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d9a016c-6c95-45d3-83f9-297e6294957b-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p\" (UID: \"9d9a016c-6c95-45d3-83f9-297e6294957b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" Jan 21 00:23:58 crc kubenswrapper[5118]: I0121 
00:23:58.384326 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d9a016c-6c95-45d3-83f9-297e6294957b-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p\" (UID: \"9d9a016c-6c95-45d3-83f9-297e6294957b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" Jan 21 00:23:58 crc kubenswrapper[5118]: I0121 00:23:58.384386 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d9a016c-6c95-45d3-83f9-297e6294957b-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p\" (UID: \"9d9a016c-6c95-45d3-83f9-297e6294957b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" Jan 21 00:23:58 crc kubenswrapper[5118]: I0121 00:23:58.406576 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-22hmc\" (UniqueName: \"kubernetes.io/projected/9d9a016c-6c95-45d3-83f9-297e6294957b-kube-api-access-22hmc\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p\" (UID: \"9d9a016c-6c95-45d3-83f9-297e6294957b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" Jan 21 00:23:58 crc kubenswrapper[5118]: I0121 00:23:58.458881 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" Jan 21 00:23:58 crc kubenswrapper[5118]: I0121 00:23:58.659490 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p"] Jan 21 00:23:59 crc kubenswrapper[5118]: I0121 00:23:59.325814 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" event={"ID":"9d9a016c-6c95-45d3-83f9-297e6294957b","Type":"ContainerStarted","Data":"8d2c747a3cd4feb0ee9ce2b466174f402bc06702c7be23498a08e6d29a5f6bdb"} Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.131459 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482584-twbhm"] Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.135752 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482584-twbhm" Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.138568 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.138715 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.138911 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.140074 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482584-twbhm"] Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.207633 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngn66\" (UniqueName: 
\"kubernetes.io/projected/1f01e679-377c-4acc-9906-869bdf589782-kube-api-access-ngn66\") pod \"auto-csr-approver-29482584-twbhm\" (UID: \"1f01e679-377c-4acc-9906-869bdf589782\") " pod="openshift-infra/auto-csr-approver-29482584-twbhm" Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.308894 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ngn66\" (UniqueName: \"kubernetes.io/projected/1f01e679-377c-4acc-9906-869bdf589782-kube-api-access-ngn66\") pod \"auto-csr-approver-29482584-twbhm\" (UID: \"1f01e679-377c-4acc-9906-869bdf589782\") " pod="openshift-infra/auto-csr-approver-29482584-twbhm" Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.339371 5118 generic.go:358] "Generic (PLEG): container finished" podID="9d9a016c-6c95-45d3-83f9-297e6294957b" containerID="b397c7d5c9ffb162d254a640838c6a2f6cc8dcc7ef88b6f1152ebb9789c36005" exitCode=0 Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.339559 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" event={"ID":"9d9a016c-6c95-45d3-83f9-297e6294957b","Type":"ContainerDied","Data":"b397c7d5c9ffb162d254a640838c6a2f6cc8dcc7ef88b6f1152ebb9789c36005"} Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.341577 5118 generic.go:358] "Generic (PLEG): container finished" podID="259b3b29-d29f-46b7-8808-75e572aadf9f" containerID="bdf57760c4682f31d756de60ae163f097ffabc11808ec689ee85affc989ca421" exitCode=0 Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.341630 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" event={"ID":"259b3b29-d29f-46b7-8808-75e572aadf9f","Type":"ContainerDied","Data":"bdf57760c4682f31d756de60ae163f097ffabc11808ec689ee85affc989ca421"} Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.342148 5118 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ngn66\" (UniqueName: \"kubernetes.io/projected/1f01e679-377c-4acc-9906-869bdf589782-kube-api-access-ngn66\") pod \"auto-csr-approver-29482584-twbhm\" (UID: \"1f01e679-377c-4acc-9906-869bdf589782\") " pod="openshift-infra/auto-csr-approver-29482584-twbhm" Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.400667 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h8lxs" podStartSLOduration=6.509783713 podStartE2EDuration="7.40064511s" podCreationTimestamp="2026-01-21 00:23:53 +0000 UTC" firstStartedPulling="2026-01-21 00:23:55.289967012 +0000 UTC m=+890.614214040" lastFinishedPulling="2026-01-21 00:23:56.180828409 +0000 UTC m=+891.505075437" observedRunningTime="2026-01-21 00:24:00.394960449 +0000 UTC m=+895.719207497" watchObservedRunningTime="2026-01-21 00:24:00.40064511 +0000 UTC m=+895.724892138" Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.462399 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482584-twbhm" Jan 21 00:24:00 crc kubenswrapper[5118]: I0121 00:24:00.647187 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482584-twbhm"] Jan 21 00:24:01 crc kubenswrapper[5118]: I0121 00:24:01.347887 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482584-twbhm" event={"ID":"1f01e679-377c-4acc-9906-869bdf589782","Type":"ContainerStarted","Data":"5e852c55b91bfee1e07952a9749a564e171a9821a6649022f2d62d7ed70ea8fe"} Jan 21 00:24:03 crc kubenswrapper[5118]: I0121 00:24:03.361098 5118 generic.go:358] "Generic (PLEG): container finished" podID="1f01e679-377c-4acc-9906-869bdf589782" containerID="3ad4df8dea1459a8a03042a5fb0b2c493ab533db48b819f887d0b8cf4c6193cf" exitCode=0 Jan 21 00:24:03 crc kubenswrapper[5118]: I0121 00:24:03.361205 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482584-twbhm" event={"ID":"1f01e679-377c-4acc-9906-869bdf589782","Type":"ContainerDied","Data":"3ad4df8dea1459a8a03042a5fb0b2c493ab533db48b819f887d0b8cf4c6193cf"} Jan 21 00:24:03 crc kubenswrapper[5118]: I0121 00:24:03.365139 5118 generic.go:358] "Generic (PLEG): container finished" podID="9d9a016c-6c95-45d3-83f9-297e6294957b" containerID="86f7fe9f0d323264c8f06d0cfd2b39e5219713aa9a5a595572e3dd62cdac9622" exitCode=0 Jan 21 00:24:03 crc kubenswrapper[5118]: I0121 00:24:03.365201 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" event={"ID":"9d9a016c-6c95-45d3-83f9-297e6294957b","Type":"ContainerDied","Data":"86f7fe9f0d323264c8f06d0cfd2b39e5219713aa9a5a595572e3dd62cdac9622"} Jan 21 00:24:03 crc kubenswrapper[5118]: I0121 00:24:03.367550 5118 generic.go:358] "Generic (PLEG): container finished" podID="259b3b29-d29f-46b7-8808-75e572aadf9f" 
containerID="31f873b92358859fc746849a287a106a43852fffabd6d755718da427dea04e0e" exitCode=0
Jan 21 00:24:03 crc kubenswrapper[5118]: I0121 00:24:03.367643 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" event={"ID":"259b3b29-d29f-46b7-8808-75e572aadf9f","Type":"ContainerDied","Data":"31f873b92358859fc746849a287a106a43852fffabd6d755718da427dea04e0e"}
Jan 21 00:24:03 crc kubenswrapper[5118]: I0121 00:24:03.801581 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:24:03 crc kubenswrapper[5118]: I0121 00:24:03.801655 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:24:04 crc kubenswrapper[5118]: I0121 00:24:04.380951 5118 generic.go:358] "Generic (PLEG): container finished" podID="9d9a016c-6c95-45d3-83f9-297e6294957b" containerID="40296d0baeee8c22dd3a291a9e42d056410669978fcbd3122d6baa499b7911f1" exitCode=0
Jan 21 00:24:04 crc kubenswrapper[5118]: I0121 00:24:04.381126 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" event={"ID":"9d9a016c-6c95-45d3-83f9-297e6294957b","Type":"ContainerDied","Data":"40296d0baeee8c22dd3a291a9e42d056410669978fcbd3122d6baa499b7911f1"}
Jan 21 00:24:04 crc kubenswrapper[5118]: I0121 00:24:04.385510 5118 generic.go:358] "Generic (PLEG): container finished" podID="259b3b29-d29f-46b7-8808-75e572aadf9f" containerID="30356275663ab77a749574d9cd9d867c985c300cb215dd2e197e08feff21572d" exitCode=0
Jan 21 00:24:04 crc kubenswrapper[5118]: I0121 00:24:04.385577 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" event={"ID":"259b3b29-d29f-46b7-8808-75e572aadf9f","Type":"ContainerDied","Data":"30356275663ab77a749574d9cd9d867c985c300cb215dd2e197e08feff21572d"}
Jan 21 00:24:04 crc kubenswrapper[5118]: I0121 00:24:04.621411 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-h8lxs"
Jan 21 00:24:04 crc kubenswrapper[5118]: I0121 00:24:04.621727 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h8lxs"
Jan 21 00:24:04 crc kubenswrapper[5118]: I0121 00:24:04.622063 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482584-twbhm"
Jan 21 00:24:04 crc kubenswrapper[5118]: I0121 00:24:04.667109 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h8lxs"
Jan 21 00:24:04 crc kubenswrapper[5118]: I0121 00:24:04.682017 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngn66\" (UniqueName: \"kubernetes.io/projected/1f01e679-377c-4acc-9906-869bdf589782-kube-api-access-ngn66\") pod \"1f01e679-377c-4acc-9906-869bdf589782\" (UID: \"1f01e679-377c-4acc-9906-869bdf589782\") "
Jan 21 00:24:04 crc kubenswrapper[5118]: I0121 00:24:04.691002 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f01e679-377c-4acc-9906-869bdf589782-kube-api-access-ngn66" (OuterVolumeSpecName: "kube-api-access-ngn66") pod "1f01e679-377c-4acc-9906-869bdf589782" (UID: "1f01e679-377c-4acc-9906-869bdf589782"). InnerVolumeSpecName "kube-api-access-ngn66". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:24:04 crc kubenswrapper[5118]: I0121 00:24:04.784087 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ngn66\" (UniqueName: \"kubernetes.io/projected/1f01e679-377c-4acc-9906-869bdf589782-kube-api-access-ngn66\") on node \"crc\" DevicePath \"\""
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.267649 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log"
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.268619 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log"
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.299735 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.299798 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.395709 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482584-twbhm"
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.395800 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482584-twbhm" event={"ID":"1f01e679-377c-4acc-9906-869bdf589782","Type":"ContainerDied","Data":"5e852c55b91bfee1e07952a9749a564e171a9821a6649022f2d62d7ed70ea8fe"}
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.395866 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e852c55b91bfee1e07952a9749a564e171a9821a6649022f2d62d7ed70ea8fe"
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.436733 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h8lxs"
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.685909 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482578-ts7d8"]
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.692058 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482578-ts7d8"]
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.712885 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd"
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.717717 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p"
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.820593 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/259b3b29-d29f-46b7-8808-75e572aadf9f-util\") pod \"259b3b29-d29f-46b7-8808-75e572aadf9f\" (UID: \"259b3b29-d29f-46b7-8808-75e572aadf9f\") "
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.820764 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/259b3b29-d29f-46b7-8808-75e572aadf9f-bundle\") pod \"259b3b29-d29f-46b7-8808-75e572aadf9f\" (UID: \"259b3b29-d29f-46b7-8808-75e572aadf9f\") "
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.820824 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvbj8\" (UniqueName: \"kubernetes.io/projected/259b3b29-d29f-46b7-8808-75e572aadf9f-kube-api-access-vvbj8\") pod \"259b3b29-d29f-46b7-8808-75e572aadf9f\" (UID: \"259b3b29-d29f-46b7-8808-75e572aadf9f\") "
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.820846 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d9a016c-6c95-45d3-83f9-297e6294957b-util\") pod \"9d9a016c-6c95-45d3-83f9-297e6294957b\" (UID: \"9d9a016c-6c95-45d3-83f9-297e6294957b\") "
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.820912 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22hmc\" (UniqueName: \"kubernetes.io/projected/9d9a016c-6c95-45d3-83f9-297e6294957b-kube-api-access-22hmc\") pod \"9d9a016c-6c95-45d3-83f9-297e6294957b\" (UID: \"9d9a016c-6c95-45d3-83f9-297e6294957b\") "
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.820949 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d9a016c-6c95-45d3-83f9-297e6294957b-bundle\") pod \"9d9a016c-6c95-45d3-83f9-297e6294957b\" (UID: \"9d9a016c-6c95-45d3-83f9-297e6294957b\") "
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.822022 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d9a016c-6c95-45d3-83f9-297e6294957b-bundle" (OuterVolumeSpecName: "bundle") pod "9d9a016c-6c95-45d3-83f9-297e6294957b" (UID: "9d9a016c-6c95-45d3-83f9-297e6294957b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.822439 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/259b3b29-d29f-46b7-8808-75e572aadf9f-bundle" (OuterVolumeSpecName: "bundle") pod "259b3b29-d29f-46b7-8808-75e572aadf9f" (UID: "259b3b29-d29f-46b7-8808-75e572aadf9f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.826764 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d9a016c-6c95-45d3-83f9-297e6294957b-kube-api-access-22hmc" (OuterVolumeSpecName: "kube-api-access-22hmc") pod "9d9a016c-6c95-45d3-83f9-297e6294957b" (UID: "9d9a016c-6c95-45d3-83f9-297e6294957b"). InnerVolumeSpecName "kube-api-access-22hmc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.828975 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/259b3b29-d29f-46b7-8808-75e572aadf9f-util" (OuterVolumeSpecName: "util") pod "259b3b29-d29f-46b7-8808-75e572aadf9f" (UID: "259b3b29-d29f-46b7-8808-75e572aadf9f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.830015 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d9a016c-6c95-45d3-83f9-297e6294957b-util" (OuterVolumeSpecName: "util") pod "9d9a016c-6c95-45d3-83f9-297e6294957b" (UID: "9d9a016c-6c95-45d3-83f9-297e6294957b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.831496 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/259b3b29-d29f-46b7-8808-75e572aadf9f-kube-api-access-vvbj8" (OuterVolumeSpecName: "kube-api-access-vvbj8") pod "259b3b29-d29f-46b7-8808-75e572aadf9f" (UID: "259b3b29-d29f-46b7-8808-75e572aadf9f"). InnerVolumeSpecName "kube-api-access-vvbj8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.922906 5118 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/259b3b29-d29f-46b7-8808-75e572aadf9f-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.922958 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vvbj8\" (UniqueName: \"kubernetes.io/projected/259b3b29-d29f-46b7-8808-75e572aadf9f-kube-api-access-vvbj8\") on node \"crc\" DevicePath \"\""
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.922980 5118 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d9a016c-6c95-45d3-83f9-297e6294957b-util\") on node \"crc\" DevicePath \"\""
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.923001 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-22hmc\" (UniqueName: \"kubernetes.io/projected/9d9a016c-6c95-45d3-83f9-297e6294957b-kube-api-access-22hmc\") on node \"crc\" DevicePath \"\""
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.923019 5118 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d9a016c-6c95-45d3-83f9-297e6294957b-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 00:24:05 crc kubenswrapper[5118]: I0121 00:24:05.923036 5118 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/259b3b29-d29f-46b7-8808-75e572aadf9f-util\") on node \"crc\" DevicePath \"\""
Jan 21 00:24:06 crc kubenswrapper[5118]: I0121 00:24:06.406606 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p" event={"ID":"9d9a016c-6c95-45d3-83f9-297e6294957b","Type":"ContainerDied","Data":"8d2c747a3cd4feb0ee9ce2b466174f402bc06702c7be23498a08e6d29a5f6bdb"}
Jan 21 00:24:06 crc kubenswrapper[5118]: I0121 00:24:06.406655 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d2c747a3cd4feb0ee9ce2b466174f402bc06702c7be23498a08e6d29a5f6bdb"
Jan 21 00:24:06 crc kubenswrapper[5118]: I0121 00:24:06.406731 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p"
Jan 21 00:24:06 crc kubenswrapper[5118]: I0121 00:24:06.409701 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd"
Jan 21 00:24:06 crc kubenswrapper[5118]: I0121 00:24:06.409691 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd" event={"ID":"259b3b29-d29f-46b7-8808-75e572aadf9f","Type":"ContainerDied","Data":"498c815792ec29acb9f8105523f20f9f474ca8a9c6008110a2a45d419daff7c0"}
Jan 21 00:24:06 crc kubenswrapper[5118]: I0121 00:24:06.409866 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="498c815792ec29acb9f8105523f20f9f474ca8a9c6008110a2a45d419daff7c0"
Jan 21 00:24:06 crc kubenswrapper[5118]: I0121 00:24:06.983747 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc699375-1fce-467a-a767-ec49bc9bf989" path="/var/lib/kubelet/pods/bc699375-1fce-467a-a767-ec49bc9bf989/volumes"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.298455 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qhtlm"]
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299658 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f01e679-377c-4acc-9906-869bdf589782" containerName="oc"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299672 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f01e679-377c-4acc-9906-869bdf589782" containerName="oc"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299694 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="259b3b29-d29f-46b7-8808-75e572aadf9f" containerName="util"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299699 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="259b3b29-d29f-46b7-8808-75e572aadf9f" containerName="util"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299715 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d9a016c-6c95-45d3-83f9-297e6294957b" containerName="util"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299721 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9a016c-6c95-45d3-83f9-297e6294957b" containerName="util"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299729 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="259b3b29-d29f-46b7-8808-75e572aadf9f" containerName="extract"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299734 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="259b3b29-d29f-46b7-8808-75e572aadf9f" containerName="extract"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299745 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="259b3b29-d29f-46b7-8808-75e572aadf9f" containerName="pull"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299750 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="259b3b29-d29f-46b7-8808-75e572aadf9f" containerName="pull"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299762 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d9a016c-6c95-45d3-83f9-297e6294957b" containerName="pull"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299767 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9a016c-6c95-45d3-83f9-297e6294957b" containerName="pull"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299773 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d9a016c-6c95-45d3-83f9-297e6294957b" containerName="extract"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299778 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9a016c-6c95-45d3-83f9-297e6294957b" containerName="extract"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299873 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="9d9a016c-6c95-45d3-83f9-297e6294957b" containerName="extract"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299883 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f01e679-377c-4acc-9906-869bdf589782" containerName="oc"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.299893 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="259b3b29-d29f-46b7-8808-75e572aadf9f" containerName="extract"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.813373 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qhtlm"]
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.813575 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhtlm"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.980987 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5952abc-8250-4b8b-a5a1-ef89042a3e91-utilities\") pod \"certified-operators-qhtlm\" (UID: \"e5952abc-8250-4b8b-a5a1-ef89042a3e91\") " pod="openshift-marketplace/certified-operators-qhtlm"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.981214 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnmgw\" (UniqueName: \"kubernetes.io/projected/e5952abc-8250-4b8b-a5a1-ef89042a3e91-kube-api-access-fnmgw\") pod \"certified-operators-qhtlm\" (UID: \"e5952abc-8250-4b8b-a5a1-ef89042a3e91\") " pod="openshift-marketplace/certified-operators-qhtlm"
Jan 21 00:24:09 crc kubenswrapper[5118]: I0121 00:24:09.981481 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5952abc-8250-4b8b-a5a1-ef89042a3e91-catalog-content\") pod \"certified-operators-qhtlm\" (UID: \"e5952abc-8250-4b8b-a5a1-ef89042a3e91\") " pod="openshift-marketplace/certified-operators-qhtlm"
Jan 21 00:24:10 crc kubenswrapper[5118]: I0121 00:24:10.083000 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5952abc-8250-4b8b-a5a1-ef89042a3e91-utilities\") pod \"certified-operators-qhtlm\" (UID: \"e5952abc-8250-4b8b-a5a1-ef89042a3e91\") " pod="openshift-marketplace/certified-operators-qhtlm"
Jan 21 00:24:10 crc kubenswrapper[5118]: I0121 00:24:10.083447 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fnmgw\" (UniqueName: \"kubernetes.io/projected/e5952abc-8250-4b8b-a5a1-ef89042a3e91-kube-api-access-fnmgw\") pod \"certified-operators-qhtlm\" (UID: \"e5952abc-8250-4b8b-a5a1-ef89042a3e91\") " pod="openshift-marketplace/certified-operators-qhtlm"
Jan 21 00:24:10 crc kubenswrapper[5118]: I0121 00:24:10.083540 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5952abc-8250-4b8b-a5a1-ef89042a3e91-utilities\") pod \"certified-operators-qhtlm\" (UID: \"e5952abc-8250-4b8b-a5a1-ef89042a3e91\") " pod="openshift-marketplace/certified-operators-qhtlm"
Jan 21 00:24:10 crc kubenswrapper[5118]: I0121 00:24:10.083637 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5952abc-8250-4b8b-a5a1-ef89042a3e91-catalog-content\") pod \"certified-operators-qhtlm\" (UID: \"e5952abc-8250-4b8b-a5a1-ef89042a3e91\") " pod="openshift-marketplace/certified-operators-qhtlm"
Jan 21 00:24:10 crc kubenswrapper[5118]: I0121 00:24:10.084103 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5952abc-8250-4b8b-a5a1-ef89042a3e91-catalog-content\") pod \"certified-operators-qhtlm\" (UID: \"e5952abc-8250-4b8b-a5a1-ef89042a3e91\") " pod="openshift-marketplace/certified-operators-qhtlm"
Jan 21 00:24:10 crc kubenswrapper[5118]: I0121 00:24:10.110500 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnmgw\" (UniqueName: \"kubernetes.io/projected/e5952abc-8250-4b8b-a5a1-ef89042a3e91-kube-api-access-fnmgw\") pod \"certified-operators-qhtlm\" (UID: \"e5952abc-8250-4b8b-a5a1-ef89042a3e91\") " pod="openshift-marketplace/certified-operators-qhtlm"
Jan 21 00:24:10 crc kubenswrapper[5118]: I0121 00:24:10.163657 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhtlm"
Jan 21 00:24:10 crc kubenswrapper[5118]: I0121 00:24:10.630740 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qhtlm"]
Jan 21 00:24:10 crc kubenswrapper[5118]: W0121 00:24:10.646311 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5952abc_8250_4b8b_a5a1_ef89042a3e91.slice/crio-597d571d05261346a3924151794daff92b01a195bfd1fd237fa3cdbfa7f78f88 WatchSource:0}: Error finding container 597d571d05261346a3924151794daff92b01a195bfd1fd237fa3cdbfa7f78f88: Status 404 returned error can't find the container with id 597d571d05261346a3924151794daff92b01a195bfd1fd237fa3cdbfa7f78f88
Jan 21 00:24:11 crc kubenswrapper[5118]: I0121 00:24:11.441379 5118 generic.go:358] "Generic (PLEG): container finished" podID="e5952abc-8250-4b8b-a5a1-ef89042a3e91" containerID="ca3caf226f3a91200b99ca626b94d33a0fd1aae0e02338740a304a34ed3e85c7" exitCode=0
Jan 21 00:24:11 crc kubenswrapper[5118]: I0121 00:24:11.441438 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhtlm" event={"ID":"e5952abc-8250-4b8b-a5a1-ef89042a3e91","Type":"ContainerDied","Data":"ca3caf226f3a91200b99ca626b94d33a0fd1aae0e02338740a304a34ed3e85c7"}
Jan 21 00:24:11 crc kubenswrapper[5118]: I0121 00:24:11.441798 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhtlm" event={"ID":"e5952abc-8250-4b8b-a5a1-ef89042a3e91","Type":"ContainerStarted","Data":"597d571d05261346a3924151794daff92b01a195bfd1fd237fa3cdbfa7f78f88"}
Jan 21 00:24:12 crc kubenswrapper[5118]: I0121 00:24:12.956536 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-9tpjv"]
Jan 21 00:24:12 crc kubenswrapper[5118]: I0121 00:24:12.966942 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-9tpjv"
Jan 21 00:24:12 crc kubenswrapper[5118]: I0121 00:24:12.969267 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-ct6kt\""
Jan 21 00:24:12 crc kubenswrapper[5118]: I0121 00:24:12.970594 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-9tpjv"]
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.031743 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zqdh\" (UniqueName: \"kubernetes.io/projected/2f613dd9-bed2-40a6-aabc-5fa37c0dbbb2-kube-api-access-7zqdh\") pod \"interconnect-operator-78b9bd8798-9tpjv\" (UID: \"2f613dd9-bed2-40a6-aabc-5fa37c0dbbb2\") " pod="service-telemetry/interconnect-operator-78b9bd8798-9tpjv"
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.133351 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7zqdh\" (UniqueName: \"kubernetes.io/projected/2f613dd9-bed2-40a6-aabc-5fa37c0dbbb2-kube-api-access-7zqdh\") pod \"interconnect-operator-78b9bd8798-9tpjv\" (UID: \"2f613dd9-bed2-40a6-aabc-5fa37c0dbbb2\") " pod="service-telemetry/interconnect-operator-78b9bd8798-9tpjv"
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.154576 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zqdh\" (UniqueName: \"kubernetes.io/projected/2f613dd9-bed2-40a6-aabc-5fa37c0dbbb2-kube-api-access-7zqdh\") pod \"interconnect-operator-78b9bd8798-9tpjv\" (UID: \"2f613dd9-bed2-40a6-aabc-5fa37c0dbbb2\") " pod="service-telemetry/interconnect-operator-78b9bd8798-9tpjv"
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.282211 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-9tpjv"
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.301287 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h8lxs"]
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.301566 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h8lxs" podUID="9c01e4c5-0de7-4437-b8cc-acdba1d858d6" containerName="registry-server" containerID="cri-o://54da7415c823806964d9b8360af3e2d164d9cb93d7573b64870cae0eb5032379" gracePeriod=2
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.471171 5118 generic.go:358] "Generic (PLEG): container finished" podID="9c01e4c5-0de7-4437-b8cc-acdba1d858d6" containerID="54da7415c823806964d9b8360af3e2d164d9cb93d7573b64870cae0eb5032379" exitCode=0
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.471201 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8lxs" event={"ID":"9c01e4c5-0de7-4437-b8cc-acdba1d858d6","Type":"ContainerDied","Data":"54da7415c823806964d9b8360af3e2d164d9cb93d7573b64870cae0eb5032379"}
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.473636 5118 generic.go:358] "Generic (PLEG): container finished" podID="e5952abc-8250-4b8b-a5a1-ef89042a3e91" containerID="acffaae12e492969a687ceedb077f3a9805eecbfb83152784b5ebf5ab7f3eab5" exitCode=0
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.473731 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhtlm" event={"ID":"e5952abc-8250-4b8b-a5a1-ef89042a3e91","Type":"ContainerDied","Data":"acffaae12e492969a687ceedb077f3a9805eecbfb83152784b5ebf5ab7f3eab5"}
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.520150 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-9tpjv"]
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.682430 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h8lxs"
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.757689 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmgdl\" (UniqueName: \"kubernetes.io/projected/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-kube-api-access-gmgdl\") pod \"9c01e4c5-0de7-4437-b8cc-acdba1d858d6\" (UID: \"9c01e4c5-0de7-4437-b8cc-acdba1d858d6\") "
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.757812 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-catalog-content\") pod \"9c01e4c5-0de7-4437-b8cc-acdba1d858d6\" (UID: \"9c01e4c5-0de7-4437-b8cc-acdba1d858d6\") "
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.757962 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-utilities\") pod \"9c01e4c5-0de7-4437-b8cc-acdba1d858d6\" (UID: \"9c01e4c5-0de7-4437-b8cc-acdba1d858d6\") "
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.759042 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-utilities" (OuterVolumeSpecName: "utilities") pod "9c01e4c5-0de7-4437-b8cc-acdba1d858d6" (UID: "9c01e4c5-0de7-4437-b8cc-acdba1d858d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.769372 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-kube-api-access-gmgdl" (OuterVolumeSpecName: "kube-api-access-gmgdl") pod "9c01e4c5-0de7-4437-b8cc-acdba1d858d6" (UID: "9c01e4c5-0de7-4437-b8cc-acdba1d858d6"). InnerVolumeSpecName "kube-api-access-gmgdl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.807239 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c01e4c5-0de7-4437-b8cc-acdba1d858d6" (UID: "9c01e4c5-0de7-4437-b8cc-acdba1d858d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.859486 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.859542 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gmgdl\" (UniqueName: \"kubernetes.io/projected/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-kube-api-access-gmgdl\") on node \"crc\" DevicePath \"\""
Jan 21 00:24:13 crc kubenswrapper[5118]: I0121 00:24:13.859567 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c01e4c5-0de7-4437-b8cc-acdba1d858d6-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 00:24:14 crc kubenswrapper[5118]: I0121 00:24:14.482699 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h8lxs"
Jan 21 00:24:14 crc kubenswrapper[5118]: I0121 00:24:14.482703 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8lxs" event={"ID":"9c01e4c5-0de7-4437-b8cc-acdba1d858d6","Type":"ContainerDied","Data":"d3a84bc206185e6f2bb78b04c5540932066cfbd7c14c7994f8910c613565718b"}
Jan 21 00:24:14 crc kubenswrapper[5118]: I0121 00:24:14.483141 5118 scope.go:117] "RemoveContainer" containerID="54da7415c823806964d9b8360af3e2d164d9cb93d7573b64870cae0eb5032379"
Jan 21 00:24:14 crc kubenswrapper[5118]: I0121 00:24:14.485825 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhtlm" event={"ID":"e5952abc-8250-4b8b-a5a1-ef89042a3e91","Type":"ContainerStarted","Data":"9a28e3fae4b2a86054134f4b28ad639003aee4d785a38588567fb906a493b29b"}
Jan 21 00:24:14 crc kubenswrapper[5118]: I0121 00:24:14.487090 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-9tpjv" event={"ID":"2f613dd9-bed2-40a6-aabc-5fa37c0dbbb2","Type":"ContainerStarted","Data":"74b6f72e84a267afc187657572e8411992eb89fa892307d04595c3439b120769"}
Jan 21 00:24:14 crc kubenswrapper[5118]: I0121 00:24:14.501541 5118 scope.go:117] "RemoveContainer" containerID="a072137842a334349bb59648aaa3e10d82a713c0b81eed581e91b6dd54817dc6"
Jan 21 00:24:14 crc kubenswrapper[5118]: I0121 00:24:14.522849 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qhtlm" podStartSLOduration=4.593364112 podStartE2EDuration="5.522828595s" podCreationTimestamp="2026-01-21 00:24:09 +0000 UTC" firstStartedPulling="2026-01-21 00:24:11.442341458 +0000 UTC m=+906.766588476" lastFinishedPulling="2026-01-21 00:24:12.371805921 +0000 UTC m=+907.696052959" observedRunningTime="2026-01-21 00:24:14.511786081 +0000 UTC m=+909.836033119" watchObservedRunningTime="2026-01-21 00:24:14.522828595 +0000 UTC m=+909.847075613"
Jan 21 00:24:14 crc kubenswrapper[5118]: I0121 00:24:14.529154 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h8lxs"]
Jan 21 00:24:14 crc kubenswrapper[5118]: I0121 00:24:14.534398 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h8lxs"]
Jan 21 00:24:14 crc kubenswrapper[5118]: I0121 00:24:14.541225 5118 scope.go:117] "RemoveContainer" containerID="f937d37174d96c23b08548338989a181e5db74f0788f77d9e73a70d9c2f1e013"
Jan 21 00:24:14 crc kubenswrapper[5118]: I0121 00:24:14.984378 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c01e4c5-0de7-4437-b8cc-acdba1d858d6" path="/var/lib/kubelet/pods/9c01e4c5-0de7-4437-b8cc-acdba1d858d6/volumes"
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.356635 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-z99rt"]
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.357276 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9c01e4c5-0de7-4437-b8cc-acdba1d858d6" containerName="extract-utilities"
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.357292 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c01e4c5-0de7-4437-b8cc-acdba1d858d6" containerName="extract-utilities"
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.357313 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9c01e4c5-0de7-4437-b8cc-acdba1d858d6" containerName="extract-content"
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.357318 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c01e4c5-0de7-4437-b8cc-acdba1d858d6" containerName="extract-content"
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.357332 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9c01e4c5-0de7-4437-b8cc-acdba1d858d6" containerName="registry-server"
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.357337 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c01e4c5-0de7-4437-b8cc-acdba1d858d6" containerName="registry-server"
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.357451 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="9c01e4c5-0de7-4437-b8cc-acdba1d858d6" containerName="registry-server"
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.364420 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-z99rt"
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.365999 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-q5h8s\""
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.371186 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-z99rt"]
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.477667 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tqqf\" (UniqueName: \"kubernetes.io/projected/a691b713-173e-4931-85e9-1510e1a0ee6a-kube-api-access-5tqqf\") pod \"service-telemetry-operator-794b5697c7-z99rt\" (UID: \"a691b713-173e-4931-85e9-1510e1a0ee6a\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-z99rt"
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.477740 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/a691b713-173e-4931-85e9-1510e1a0ee6a-runner\") pod \"service-telemetry-operator-794b5697c7-z99rt\" (UID: \"a691b713-173e-4931-85e9-1510e1a0ee6a\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-z99rt"
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.578797 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5tqqf\" (UniqueName: \"kubernetes.io/projected/a691b713-173e-4931-85e9-1510e1a0ee6a-kube-api-access-5tqqf\") pod \"service-telemetry-operator-794b5697c7-z99rt\" (UID: \"a691b713-173e-4931-85e9-1510e1a0ee6a\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-z99rt"
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.578855 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/a691b713-173e-4931-85e9-1510e1a0ee6a-runner\") pod \"service-telemetry-operator-794b5697c7-z99rt\" (UID: \"a691b713-173e-4931-85e9-1510e1a0ee6a\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-z99rt"
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.579457 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/a691b713-173e-4931-85e9-1510e1a0ee6a-runner\") pod \"service-telemetry-operator-794b5697c7-z99rt\" (UID: \"a691b713-173e-4931-85e9-1510e1a0ee6a\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-z99rt"
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.618338 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tqqf\" (UniqueName: \"kubernetes.io/projected/a691b713-173e-4931-85e9-1510e1a0ee6a-kube-api-access-5tqqf\") pod \"service-telemetry-operator-794b5697c7-z99rt\" (UID: \"a691b713-173e-4931-85e9-1510e1a0ee6a\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-z99rt"
Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.682081 5118 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-z99rt" Jan 21 00:24:15 crc kubenswrapper[5118]: I0121 00:24:15.885205 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-z99rt"] Jan 21 00:24:15 crc kubenswrapper[5118]: W0121 00:24:15.900143 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda691b713_173e_4931_85e9_1510e1a0ee6a.slice/crio-ba5c5ca68cc61a8faae0f9b5e06dc24482718c9284c9277eeee568ca84fc7b62 WatchSource:0}: Error finding container ba5c5ca68cc61a8faae0f9b5e06dc24482718c9284c9277eeee568ca84fc7b62: Status 404 returned error can't find the container with id ba5c5ca68cc61a8faae0f9b5e06dc24482718c9284c9277eeee568ca84fc7b62 Jan 21 00:24:16 crc kubenswrapper[5118]: I0121 00:24:16.502273 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-z99rt" event={"ID":"a691b713-173e-4931-85e9-1510e1a0ee6a","Type":"ContainerStarted","Data":"ba5c5ca68cc61a8faae0f9b5e06dc24482718c9284c9277eeee568ca84fc7b62"} Jan 21 00:24:20 crc kubenswrapper[5118]: I0121 00:24:20.163934 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-qhtlm" Jan 21 00:24:20 crc kubenswrapper[5118]: I0121 00:24:20.165497 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qhtlm" Jan 21 00:24:20 crc kubenswrapper[5118]: I0121 00:24:20.206799 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qhtlm" Jan 21 00:24:20 crc kubenswrapper[5118]: I0121 00:24:20.580772 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qhtlm" Jan 21 00:24:22 crc kubenswrapper[5118]: I0121 00:24:22.489329 5118 
kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qhtlm"] Jan 21 00:24:23 crc kubenswrapper[5118]: I0121 00:24:23.555519 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qhtlm" podUID="e5952abc-8250-4b8b-a5a1-ef89042a3e91" containerName="registry-server" containerID="cri-o://9a28e3fae4b2a86054134f4b28ad639003aee4d785a38588567fb906a493b29b" gracePeriod=2 Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.457988 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhtlm" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.528050 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5952abc-8250-4b8b-a5a1-ef89042a3e91-catalog-content\") pod \"e5952abc-8250-4b8b-a5a1-ef89042a3e91\" (UID: \"e5952abc-8250-4b8b-a5a1-ef89042a3e91\") " Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.528938 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5952abc-8250-4b8b-a5a1-ef89042a3e91-utilities\") pod \"e5952abc-8250-4b8b-a5a1-ef89042a3e91\" (UID: \"e5952abc-8250-4b8b-a5a1-ef89042a3e91\") " Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.529012 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnmgw\" (UniqueName: \"kubernetes.io/projected/e5952abc-8250-4b8b-a5a1-ef89042a3e91-kube-api-access-fnmgw\") pod \"e5952abc-8250-4b8b-a5a1-ef89042a3e91\" (UID: \"e5952abc-8250-4b8b-a5a1-ef89042a3e91\") " Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.531507 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5952abc-8250-4b8b-a5a1-ef89042a3e91-utilities" (OuterVolumeSpecName: "utilities") pod 
"e5952abc-8250-4b8b-a5a1-ef89042a3e91" (UID: "e5952abc-8250-4b8b-a5a1-ef89042a3e91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.538036 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5952abc-8250-4b8b-a5a1-ef89042a3e91-kube-api-access-fnmgw" (OuterVolumeSpecName: "kube-api-access-fnmgw") pod "e5952abc-8250-4b8b-a5a1-ef89042a3e91" (UID: "e5952abc-8250-4b8b-a5a1-ef89042a3e91"). InnerVolumeSpecName "kube-api-access-fnmgw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.562571 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5952abc-8250-4b8b-a5a1-ef89042a3e91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5952abc-8250-4b8b-a5a1-ef89042a3e91" (UID: "e5952abc-8250-4b8b-a5a1-ef89042a3e91"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.575675 5118 generic.go:358] "Generic (PLEG): container finished" podID="e5952abc-8250-4b8b-a5a1-ef89042a3e91" containerID="9a28e3fae4b2a86054134f4b28ad639003aee4d785a38588567fb906a493b29b" exitCode=0 Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.575814 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhtlm" event={"ID":"e5952abc-8250-4b8b-a5a1-ef89042a3e91","Type":"ContainerDied","Data":"9a28e3fae4b2a86054134f4b28ad639003aee4d785a38588567fb906a493b29b"} Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.575849 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhtlm" event={"ID":"e5952abc-8250-4b8b-a5a1-ef89042a3e91","Type":"ContainerDied","Data":"597d571d05261346a3924151794daff92b01a195bfd1fd237fa3cdbfa7f78f88"} Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.575873 5118 scope.go:117] "RemoveContainer" containerID="9a28e3fae4b2a86054134f4b28ad639003aee4d785a38588567fb906a493b29b" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.576057 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qhtlm" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.599517 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-9tpjv" podStartSLOduration=1.772533906 podStartE2EDuration="13.599501536s" podCreationTimestamp="2026-01-21 00:24:12 +0000 UTC" firstStartedPulling="2026-01-21 00:24:13.554983053 +0000 UTC m=+908.879230071" lastFinishedPulling="2026-01-21 00:24:25.381950673 +0000 UTC m=+920.706197701" observedRunningTime="2026-01-21 00:24:25.599305401 +0000 UTC m=+920.923552419" watchObservedRunningTime="2026-01-21 00:24:25.599501536 +0000 UTC m=+920.923748554" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.603241 5118 scope.go:117] "RemoveContainer" containerID="acffaae12e492969a687ceedb077f3a9805eecbfb83152784b5ebf5ab7f3eab5" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.617282 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qhtlm"] Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.621690 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qhtlm"] Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.635170 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5952abc-8250-4b8b-a5a1-ef89042a3e91-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.635210 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5952abc-8250-4b8b-a5a1-ef89042a3e91-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.635225 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fnmgw\" (UniqueName: 
\"kubernetes.io/projected/e5952abc-8250-4b8b-a5a1-ef89042a3e91-kube-api-access-fnmgw\") on node \"crc\" DevicePath \"\"" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.635817 5118 scope.go:117] "RemoveContainer" containerID="ca3caf226f3a91200b99ca626b94d33a0fd1aae0e02338740a304a34ed3e85c7" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.659812 5118 scope.go:117] "RemoveContainer" containerID="9a28e3fae4b2a86054134f4b28ad639003aee4d785a38588567fb906a493b29b" Jan 21 00:24:25 crc kubenswrapper[5118]: E0121 00:24:25.661777 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a28e3fae4b2a86054134f4b28ad639003aee4d785a38588567fb906a493b29b\": container with ID starting with 9a28e3fae4b2a86054134f4b28ad639003aee4d785a38588567fb906a493b29b not found: ID does not exist" containerID="9a28e3fae4b2a86054134f4b28ad639003aee4d785a38588567fb906a493b29b" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.661813 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a28e3fae4b2a86054134f4b28ad639003aee4d785a38588567fb906a493b29b"} err="failed to get container status \"9a28e3fae4b2a86054134f4b28ad639003aee4d785a38588567fb906a493b29b\": rpc error: code = NotFound desc = could not find container \"9a28e3fae4b2a86054134f4b28ad639003aee4d785a38588567fb906a493b29b\": container with ID starting with 9a28e3fae4b2a86054134f4b28ad639003aee4d785a38588567fb906a493b29b not found: ID does not exist" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.661835 5118 scope.go:117] "RemoveContainer" containerID="acffaae12e492969a687ceedb077f3a9805eecbfb83152784b5ebf5ab7f3eab5" Jan 21 00:24:25 crc kubenswrapper[5118]: E0121 00:24:25.662061 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acffaae12e492969a687ceedb077f3a9805eecbfb83152784b5ebf5ab7f3eab5\": container with ID 
starting with acffaae12e492969a687ceedb077f3a9805eecbfb83152784b5ebf5ab7f3eab5 not found: ID does not exist" containerID="acffaae12e492969a687ceedb077f3a9805eecbfb83152784b5ebf5ab7f3eab5" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.662114 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acffaae12e492969a687ceedb077f3a9805eecbfb83152784b5ebf5ab7f3eab5"} err="failed to get container status \"acffaae12e492969a687ceedb077f3a9805eecbfb83152784b5ebf5ab7f3eab5\": rpc error: code = NotFound desc = could not find container \"acffaae12e492969a687ceedb077f3a9805eecbfb83152784b5ebf5ab7f3eab5\": container with ID starting with acffaae12e492969a687ceedb077f3a9805eecbfb83152784b5ebf5ab7f3eab5 not found: ID does not exist" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.662143 5118 scope.go:117] "RemoveContainer" containerID="ca3caf226f3a91200b99ca626b94d33a0fd1aae0e02338740a304a34ed3e85c7" Jan 21 00:24:25 crc kubenswrapper[5118]: E0121 00:24:25.662328 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca3caf226f3a91200b99ca626b94d33a0fd1aae0e02338740a304a34ed3e85c7\": container with ID starting with ca3caf226f3a91200b99ca626b94d33a0fd1aae0e02338740a304a34ed3e85c7 not found: ID does not exist" containerID="ca3caf226f3a91200b99ca626b94d33a0fd1aae0e02338740a304a34ed3e85c7" Jan 21 00:24:25 crc kubenswrapper[5118]: I0121 00:24:25.662347 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca3caf226f3a91200b99ca626b94d33a0fd1aae0e02338740a304a34ed3e85c7"} err="failed to get container status \"ca3caf226f3a91200b99ca626b94d33a0fd1aae0e02338740a304a34ed3e85c7\": rpc error: code = NotFound desc = could not find container \"ca3caf226f3a91200b99ca626b94d33a0fd1aae0e02338740a304a34ed3e85c7\": container with ID starting with ca3caf226f3a91200b99ca626b94d33a0fd1aae0e02338740a304a34ed3e85c7 not found: 
ID does not exist" Jan 21 00:24:26 crc kubenswrapper[5118]: I0121 00:24:26.592535 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-z99rt" event={"ID":"a691b713-173e-4931-85e9-1510e1a0ee6a","Type":"ContainerStarted","Data":"87154a25f087567b24ec6c54be7135de95fc854595fe40275bf183378e9e8996"} Jan 21 00:24:26 crc kubenswrapper[5118]: I0121 00:24:26.598421 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-9tpjv" event={"ID":"2f613dd9-bed2-40a6-aabc-5fa37c0dbbb2","Type":"ContainerStarted","Data":"8200a3bd6b4b46045f4a0c9115cbdde8785558a1ed29cfe579bb4dffeff59289"} Jan 21 00:24:26 crc kubenswrapper[5118]: I0121 00:24:26.614559 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-794b5697c7-z99rt" podStartSLOduration=2.083540131 podStartE2EDuration="11.614543616s" podCreationTimestamp="2026-01-21 00:24:15 +0000 UTC" firstStartedPulling="2026-01-21 00:24:15.911931584 +0000 UTC m=+911.236178602" lastFinishedPulling="2026-01-21 00:24:25.442935069 +0000 UTC m=+920.767182087" observedRunningTime="2026-01-21 00:24:26.612859811 +0000 UTC m=+921.937106829" watchObservedRunningTime="2026-01-21 00:24:26.614543616 +0000 UTC m=+921.938790634" Jan 21 00:24:26 crc kubenswrapper[5118]: I0121 00:24:26.983740 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5952abc-8250-4b8b-a5a1-ef89042a3e91" path="/var/lib/kubelet/pods/e5952abc-8250-4b8b-a5a1-ef89042a3e91/volumes" Jan 21 00:24:33 crc kubenswrapper[5118]: I0121 00:24:33.800491 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:24:33 crc kubenswrapper[5118]: I0121 
00:24:33.800951 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:24:33 crc kubenswrapper[5118]: I0121 00:24:33.800993 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:24:33 crc kubenswrapper[5118]: I0121 00:24:33.801613 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f02b486c7f526cea45f0bc8e93498ac542cc749b10fbe7b2dc9e854f825b1f31"} pod="openshift-machine-config-operator/machine-config-daemon-22r9n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:24:33 crc kubenswrapper[5118]: I0121 00:24:33.801665 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" containerID="cri-o://f02b486c7f526cea45f0bc8e93498ac542cc749b10fbe7b2dc9e854f825b1f31" gracePeriod=600 Jan 21 00:24:34 crc kubenswrapper[5118]: I0121 00:24:34.661307 5118 generic.go:358] "Generic (PLEG): container finished" podID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerID="f02b486c7f526cea45f0bc8e93498ac542cc749b10fbe7b2dc9e854f825b1f31" exitCode=0 Jan 21 00:24:34 crc kubenswrapper[5118]: I0121 00:24:34.661729 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerDied","Data":"f02b486c7f526cea45f0bc8e93498ac542cc749b10fbe7b2dc9e854f825b1f31"} Jan 21 00:24:34 crc 
kubenswrapper[5118]: I0121 00:24:34.661767 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"61c6c2137480cc175302c2e82d6bbb9c15151d1ca8b8cb9acea1f49282d3488a"} Jan 21 00:24:34 crc kubenswrapper[5118]: I0121 00:24:34.661791 5118 scope.go:117] "RemoveContainer" containerID="7922e95afa9e80095c69f7b0a751dd320865224ec2831af4c9a2dcde9659cd54" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.191866 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-8wgmh"] Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.193897 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e5952abc-8250-4b8b-a5a1-ef89042a3e91" containerName="extract-utilities" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.193925 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5952abc-8250-4b8b-a5a1-ef89042a3e91" containerName="extract-utilities" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.193936 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e5952abc-8250-4b8b-a5a1-ef89042a3e91" containerName="extract-content" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.193943 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5952abc-8250-4b8b-a5a1-ef89042a3e91" containerName="extract-content" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.193953 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e5952abc-8250-4b8b-a5a1-ef89042a3e91" containerName="registry-server" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.193960 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5952abc-8250-4b8b-a5a1-ef89042a3e91" containerName="registry-server" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.194299 5118 
memory_manager.go:356] "RemoveStaleState removing state" podUID="e5952abc-8250-4b8b-a5a1-ef89042a3e91" containerName="registry-server" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.203887 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.206272 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.206557 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.208713 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.209095 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.209282 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-6mxjb\"" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.209400 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.209661 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.213757 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-8wgmh"] Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 
00:24:47.235640 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26hjq\" (UniqueName: \"kubernetes.io/projected/c7620811-1f39-4605-90d6-9b447532937f-kube-api-access-26hjq\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.235794 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c7620811-1f39-4605-90d6-9b447532937f-sasl-config\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.235883 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.235906 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.235965 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" 
(UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.236127 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-sasl-users\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.236241 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.337864 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-26hjq\" (UniqueName: \"kubernetes.io/projected/c7620811-1f39-4605-90d6-9b447532937f-kube-api-access-26hjq\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.338205 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c7620811-1f39-4605-90d6-9b447532937f-sasl-config\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" 
Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.338306 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.338390 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.338468 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.338567 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-sasl-users\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.338658 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: 
\"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.339222 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c7620811-1f39-4605-90d6-9b447532937f-sasl-config\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.348127 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.351712 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.352623 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.353197 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-sasl-users\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.353367 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.363096 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-26hjq\" (UniqueName: \"kubernetes.io/projected/c7620811-1f39-4605-90d6-9b447532937f-kube-api-access-26hjq\") pod \"default-interconnect-55bf8d5cb-8wgmh\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.529331 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.956594 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-8wgmh"] Jan 21 00:24:47 crc kubenswrapper[5118]: I0121 00:24:47.966236 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 00:24:48 crc kubenswrapper[5118]: I0121 00:24:48.777058 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" event={"ID":"c7620811-1f39-4605-90d6-9b447532937f","Type":"ContainerStarted","Data":"bf3233a447ba0be5b78ee5f3d88faacd0222e3b672c9126feed07e5fa139fd2c"} Jan 21 00:24:52 crc kubenswrapper[5118]: I0121 00:24:52.822066 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" event={"ID":"c7620811-1f39-4605-90d6-9b447532937f","Type":"ContainerStarted","Data":"f1fc1959422e725ed561d62b5a5294dd7d8683eaf6ebe4655d0f9145228a6273"} Jan 21 00:24:52 crc kubenswrapper[5118]: I0121 00:24:52.845061 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" podStartSLOduration=1.190822823 podStartE2EDuration="5.845038907s" podCreationTimestamp="2026-01-21 00:24:47 +0000 UTC" firstStartedPulling="2026-01-21 00:24:47.966429257 +0000 UTC m=+943.290676275" lastFinishedPulling="2026-01-21 00:24:52.620645301 +0000 UTC m=+947.944892359" observedRunningTime="2026-01-21 00:24:52.838499813 +0000 UTC m=+948.162746851" watchObservedRunningTime="2026-01-21 00:24:52.845038907 +0000 UTC m=+948.169285925" Jan 21 00:24:57 crc kubenswrapper[5118]: I0121 00:24:57.261407 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.304203 5118 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["service-telemetry/prometheus-default-0"] Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.304416 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.309753 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.310070 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.310260 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.310422 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.310584 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.310713 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-txcbv\"" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.310820 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.310950 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.311069 5118 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"service-telemetry\"/\"default-session-secret\"" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.311327 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.400435 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-config-out\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.400499 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-web-config\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.400536 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.400565 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-config\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.400742 5118 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b5e9ad0b-9a5d-4667-8c9a-311d941308e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5e9ad0b-9a5d-4667-8c9a-311d941308e1\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.400876 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.400911 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.400945 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.400980 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: 
\"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.401025 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bs9c\" (UniqueName: \"kubernetes.io/projected/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-kube-api-access-7bs9c\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.401133 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.401211 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-tls-assets\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.502571 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-web-config\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.502661 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-prometheus-default-rulefiles-1\") pod 
\"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.502685 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-config\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.502730 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b5e9ad0b-9a5d-4667-8c9a-311d941308e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5e9ad0b-9a5d-4667-8c9a-311d941308e1\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.502922 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.502966 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.502995 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.503023 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.503052 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7bs9c\" (UniqueName: \"kubernetes.io/projected/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-kube-api-access-7bs9c\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: E0121 00:24:58.503084 5118 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.503123 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.503196 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-tls-assets\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " 
pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: E0121 00:24:58.503214 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-secret-default-prometheus-proxy-tls podName:ee3806b6-31c0-470c-8dcf-9f7e40a5929a nodeName:}" failed. No retries permitted until 2026-01-21 00:24:59.003197504 +0000 UTC m=+954.327444522 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "ee3806b6-31c0-470c-8dcf-9f7e40a5929a") : secret "default-prometheus-proxy-tls" not found Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.503268 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-config-out\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.503750 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.503914 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 
00:24:58.504269 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.504597 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.510604 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.510655 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-tls-assets\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.511043 5118 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.511086 5118 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b5e9ad0b-9a5d-4667-8c9a-311d941308e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5e9ad0b-9a5d-4667-8c9a-311d941308e1\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c72ddea7d9285e719517431978c123570eb381e202b248ee7a1f0dfd4b34a7a3/globalmount\"" pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.512689 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-config\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.514613 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-web-config\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.519625 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-config-out\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.527334 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bs9c\" (UniqueName: \"kubernetes.io/projected/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-kube-api-access-7bs9c\") pod \"prometheus-default-0\" (UID: 
\"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:58 crc kubenswrapper[5118]: I0121 00:24:58.536208 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b5e9ad0b-9a5d-4667-8c9a-311d941308e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5e9ad0b-9a5d-4667-8c9a-311d941308e1\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:59 crc kubenswrapper[5118]: I0121 00:24:59.011054 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:24:59 crc kubenswrapper[5118]: E0121 00:24:59.011282 5118 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 21 00:24:59 crc kubenswrapper[5118]: E0121 00:24:59.011515 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-secret-default-prometheus-proxy-tls podName:ee3806b6-31c0-470c-8dcf-9f7e40a5929a nodeName:}" failed. No retries permitted until 2026-01-21 00:25:00.011495734 +0000 UTC m=+955.335742752 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "ee3806b6-31c0-470c-8dcf-9f7e40a5929a") : secret "default-prometheus-proxy-tls" not found Jan 21 00:25:00 crc kubenswrapper[5118]: I0121 00:25:00.023116 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:25:00 crc kubenswrapper[5118]: I0121 00:25:00.030894 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee3806b6-31c0-470c-8dcf-9f7e40a5929a-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"ee3806b6-31c0-470c-8dcf-9f7e40a5929a\") " pod="service-telemetry/prometheus-default-0" Jan 21 00:25:00 crc kubenswrapper[5118]: I0121 00:25:00.126007 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 21 00:25:00 crc kubenswrapper[5118]: I0121 00:25:00.351239 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 21 00:25:00 crc kubenswrapper[5118]: I0121 00:25:00.935571 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"ee3806b6-31c0-470c-8dcf-9f7e40a5929a","Type":"ContainerStarted","Data":"e68711d00003eaf0a91c805573477e585fbdcd903ad5665c7304a1ed9798c5a4"} Jan 21 00:25:04 crc kubenswrapper[5118]: I0121 00:25:04.986691 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"ee3806b6-31c0-470c-8dcf-9f7e40a5929a","Type":"ContainerStarted","Data":"ba536a5149227d9c101ca361365b92cc36ff7bd3ea6ed750d5846aa6e6665cb3"} Jan 21 00:25:06 crc kubenswrapper[5118]: I0121 00:25:06.140148 5118 scope.go:117] "RemoveContainer" containerID="d22738ae34a46dfe49021bd3762832f99742cb30853bddfdb2566d7bc46129e9" Jan 21 00:25:08 crc kubenswrapper[5118]: I0121 00:25:08.498411 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-t9sb4"] Jan 21 00:25:08 crc kubenswrapper[5118]: I0121 00:25:08.549368 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-t9sb4"] Jan 21 00:25:08 crc kubenswrapper[5118]: I0121 00:25:08.549526 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-t9sb4" Jan 21 00:25:08 crc kubenswrapper[5118]: I0121 00:25:08.658707 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c9mf\" (UniqueName: \"kubernetes.io/projected/7209940f-514b-4189-beeb-fbd77f7e6a15-kube-api-access-8c9mf\") pod \"default-snmp-webhook-6774d8dfbc-t9sb4\" (UID: \"7209940f-514b-4189-beeb-fbd77f7e6a15\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-t9sb4" Jan 21 00:25:08 crc kubenswrapper[5118]: I0121 00:25:08.760547 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8c9mf\" (UniqueName: \"kubernetes.io/projected/7209940f-514b-4189-beeb-fbd77f7e6a15-kube-api-access-8c9mf\") pod \"default-snmp-webhook-6774d8dfbc-t9sb4\" (UID: \"7209940f-514b-4189-beeb-fbd77f7e6a15\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-t9sb4" Jan 21 00:25:08 crc kubenswrapper[5118]: I0121 00:25:08.786491 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c9mf\" (UniqueName: \"kubernetes.io/projected/7209940f-514b-4189-beeb-fbd77f7e6a15-kube-api-access-8c9mf\") pod \"default-snmp-webhook-6774d8dfbc-t9sb4\" (UID: \"7209940f-514b-4189-beeb-fbd77f7e6a15\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-t9sb4" Jan 21 00:25:08 crc kubenswrapper[5118]: I0121 00:25:08.866420 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-t9sb4" Jan 21 00:25:09 crc kubenswrapper[5118]: I0121 00:25:09.339089 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-t9sb4"] Jan 21 00:25:10 crc kubenswrapper[5118]: I0121 00:25:10.025371 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-t9sb4" event={"ID":"7209940f-514b-4189-beeb-fbd77f7e6a15","Type":"ContainerStarted","Data":"e073779e860e5c300e8745bf713582819510c5bab6eafa859c16f59dcfb84531"} Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.043058 5118 generic.go:358] "Generic (PLEG): container finished" podID="ee3806b6-31c0-470c-8dcf-9f7e40a5929a" containerID="ba536a5149227d9c101ca361365b92cc36ff7bd3ea6ed750d5846aa6e6665cb3" exitCode=0 Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.043147 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"ee3806b6-31c0-470c-8dcf-9f7e40a5929a","Type":"ContainerDied","Data":"ba536a5149227d9c101ca361365b92cc36ff7bd3ea6ed750d5846aa6e6665cb3"} Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.316223 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.505303 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.505476 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.507367 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.507959 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.508036 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-2xlrz\"" Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.508106 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.508287 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.508795 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.639898 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0" Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.639966 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-config-out\") pod \"alertmanager-default-0\" (UID: 
\"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.640248 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.640537 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-config-volume\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.640621 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f16a621d-fe89-413f-9811-932dcc62b1a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f16a621d-fe89-413f-9811-932dcc62b1a8\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.640657 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-tls-assets\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.640707 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-web-config\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.640830 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s5l9\" (UniqueName: \"kubernetes.io/projected/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-kube-api-access-6s5l9\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.641017 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.741993 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-f16a621d-fe89-413f-9811-932dcc62b1a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f16a621d-fe89-413f-9811-932dcc62b1a8\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.742053 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-tls-assets\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.742090 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-web-config\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.742130 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6s5l9\" (UniqueName: \"kubernetes.io/projected/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-kube-api-access-6s5l9\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.742205 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.742278 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.742309 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-config-out\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.742358 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.742404 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-config-volume\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: E0121 00:25:12.743284 5118 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Jan 21 00:25:12 crc kubenswrapper[5118]: E0121 00:25:12.743395 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-secret-default-alertmanager-proxy-tls podName:7ce5576f-ce16-4a13-9cca-8fcc68e399e7 nodeName:}" failed. No retries permitted until 2026-01-21 00:25:13.243352 +0000 UTC m=+968.567599018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "7ce5576f-ce16-4a13-9cca-8fcc68e399e7") : secret "default-alertmanager-proxy-tls" not found
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.748796 5118 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.748836 5118 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-f16a621d-fe89-413f-9811-932dcc62b1a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f16a621d-fe89-413f-9811-932dcc62b1a8\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c35858e41c1c117930057b053b6f165e03c05ce92a0750a033a05f0b846c90c7/globalmount\"" pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.748887 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.749386 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-tls-assets\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.749732 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-config-out\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.749854 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-config-volume\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.757123 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-web-config\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.757893 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.765501 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s5l9\" (UniqueName: \"kubernetes.io/projected/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-kube-api-access-6s5l9\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:12 crc kubenswrapper[5118]: I0121 00:25:12.775566 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-f16a621d-fe89-413f-9811-932dcc62b1a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f16a621d-fe89-413f-9811-932dcc62b1a8\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:13 crc kubenswrapper[5118]: I0121 00:25:13.253796 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:13 crc kubenswrapper[5118]: E0121 00:25:13.253971 5118 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Jan 21 00:25:13 crc kubenswrapper[5118]: E0121 00:25:13.254251 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-secret-default-alertmanager-proxy-tls podName:7ce5576f-ce16-4a13-9cca-8fcc68e399e7 nodeName:}" failed. No retries permitted until 2026-01-21 00:25:14.254230569 +0000 UTC m=+969.578477587 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "7ce5576f-ce16-4a13-9cca-8fcc68e399e7") : secret "default-alertmanager-proxy-tls" not found
Jan 21 00:25:14 crc kubenswrapper[5118]: I0121 00:25:14.269477 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:14 crc kubenswrapper[5118]: E0121 00:25:14.269723 5118 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Jan 21 00:25:14 crc kubenswrapper[5118]: E0121 00:25:14.269831 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-secret-default-alertmanager-proxy-tls podName:7ce5576f-ce16-4a13-9cca-8fcc68e399e7 nodeName:}" failed. No retries permitted until 2026-01-21 00:25:16.269805723 +0000 UTC m=+971.594052741 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "7ce5576f-ce16-4a13-9cca-8fcc68e399e7") : secret "default-alertmanager-proxy-tls" not found
Jan 21 00:25:16 crc kubenswrapper[5118]: I0121 00:25:16.297343 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:16 crc kubenswrapper[5118]: I0121 00:25:16.303272 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ce5576f-ce16-4a13-9cca-8fcc68e399e7-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"7ce5576f-ce16-4a13-9cca-8fcc68e399e7\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:16 crc kubenswrapper[5118]: I0121 00:25:16.434651 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Jan 21 00:25:25 crc kubenswrapper[5118]: I0121 00:25:25.261342 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Jan 21 00:25:27 crc kubenswrapper[5118]: I0121 00:25:27.542402 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"]
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.290675 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.295357 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\""
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.296493 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\""
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.296816 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\""
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.296968 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-q646p\""
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.299776 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"]
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.401502 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7vgj\" (UniqueName: \"kubernetes.io/projected/24b1ba21-7cfb-4bdc-84e7-63e5bacee435-kube-api-access-j7vgj\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-879kb\" (UID: \"24b1ba21-7cfb-4bdc-84e7-63e5bacee435\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.401549 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/24b1ba21-7cfb-4bdc-84e7-63e5bacee435-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-879kb\" (UID: \"24b1ba21-7cfb-4bdc-84e7-63e5bacee435\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.401581 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/24b1ba21-7cfb-4bdc-84e7-63e5bacee435-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-879kb\" (UID: \"24b1ba21-7cfb-4bdc-84e7-63e5bacee435\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.401601 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/24b1ba21-7cfb-4bdc-84e7-63e5bacee435-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-879kb\" (UID: \"24b1ba21-7cfb-4bdc-84e7-63e5bacee435\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.401643 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/24b1ba21-7cfb-4bdc-84e7-63e5bacee435-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-879kb\" (UID: \"24b1ba21-7cfb-4bdc-84e7-63e5bacee435\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.503006 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j7vgj\" (UniqueName: \"kubernetes.io/projected/24b1ba21-7cfb-4bdc-84e7-63e5bacee435-kube-api-access-j7vgj\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-879kb\" (UID: \"24b1ba21-7cfb-4bdc-84e7-63e5bacee435\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.503051 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/24b1ba21-7cfb-4bdc-84e7-63e5bacee435-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-879kb\" (UID: \"24b1ba21-7cfb-4bdc-84e7-63e5bacee435\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.503078 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/24b1ba21-7cfb-4bdc-84e7-63e5bacee435-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-879kb\" (UID: \"24b1ba21-7cfb-4bdc-84e7-63e5bacee435\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.503097 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/24b1ba21-7cfb-4bdc-84e7-63e5bacee435-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-879kb\" (UID: \"24b1ba21-7cfb-4bdc-84e7-63e5bacee435\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.503154 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/24b1ba21-7cfb-4bdc-84e7-63e5bacee435-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-879kb\" (UID: \"24b1ba21-7cfb-4bdc-84e7-63e5bacee435\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.503719 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/24b1ba21-7cfb-4bdc-84e7-63e5bacee435-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-879kb\" (UID: \"24b1ba21-7cfb-4bdc-84e7-63e5bacee435\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.504324 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/24b1ba21-7cfb-4bdc-84e7-63e5bacee435-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-879kb\" (UID: \"24b1ba21-7cfb-4bdc-84e7-63e5bacee435\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.510484 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/24b1ba21-7cfb-4bdc-84e7-63e5bacee435-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-879kb\" (UID: \"24b1ba21-7cfb-4bdc-84e7-63e5bacee435\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.511305 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/24b1ba21-7cfb-4bdc-84e7-63e5bacee435-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-879kb\" (UID: \"24b1ba21-7cfb-4bdc-84e7-63e5bacee435\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.520638 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7vgj\" (UniqueName: \"kubernetes.io/projected/24b1ba21-7cfb-4bdc-84e7-63e5bacee435-kube-api-access-j7vgj\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-879kb\" (UID: \"24b1ba21-7cfb-4bdc-84e7-63e5bacee435\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.577478 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"]
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.607757 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.695179 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"]
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.695390 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.698071 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\""
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.698585 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\""
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.808450 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/26e412ff-6455-444c-a91c-350651a82800-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.808912 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/26e412ff-6455-444c-a91c-350651a82800-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.808984 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jhjm\" (UniqueName: \"kubernetes.io/projected/26e412ff-6455-444c-a91c-350651a82800-kube-api-access-7jhjm\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.809034 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/26e412ff-6455-444c-a91c-350651a82800-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.809077 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/26e412ff-6455-444c-a91c-350651a82800-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.910623 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/26e412ff-6455-444c-a91c-350651a82800-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.910688 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7jhjm\" (UniqueName: \"kubernetes.io/projected/26e412ff-6455-444c-a91c-350651a82800-kube-api-access-7jhjm\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.910725 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/26e412ff-6455-444c-a91c-350651a82800-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.910753 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/26e412ff-6455-444c-a91c-350651a82800-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.910811 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/26e412ff-6455-444c-a91c-350651a82800-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.911283 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/26e412ff-6455-444c-a91c-350651a82800-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:29 crc kubenswrapper[5118]: E0121 00:25:29.911415 5118 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 21 00:25:29 crc kubenswrapper[5118]: E0121 00:25:29.911486 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e412ff-6455-444c-a91c-350651a82800-default-cloud1-ceil-meter-proxy-tls podName:26e412ff-6455-444c-a91c-350651a82800 nodeName:}" failed. No retries permitted until 2026-01-21 00:25:30.411464286 +0000 UTC m=+985.735711364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/26e412ff-6455-444c-a91c-350651a82800-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs" (UID: "26e412ff-6455-444c-a91c-350651a82800") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.912618 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/26e412ff-6455-444c-a91c-350651a82800-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.923976 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/26e412ff-6455-444c-a91c-350651a82800-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:29 crc kubenswrapper[5118]: I0121 00:25:29.928930 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jhjm\" (UniqueName: \"kubernetes.io/projected/26e412ff-6455-444c-a91c-350651a82800-kube-api-access-7jhjm\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:30 crc kubenswrapper[5118]: I0121 00:25:30.420140 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/26e412ff-6455-444c-a91c-350651a82800-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:30 crc kubenswrapper[5118]: E0121 00:25:30.420371 5118 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 21 00:25:30 crc kubenswrapper[5118]: E0121 00:25:30.420491 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e412ff-6455-444c-a91c-350651a82800-default-cloud1-ceil-meter-proxy-tls podName:26e412ff-6455-444c-a91c-350651a82800 nodeName:}" failed. No retries permitted until 2026-01-21 00:25:31.420464435 +0000 UTC m=+986.744711503 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/26e412ff-6455-444c-a91c-350651a82800-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs" (UID: "26e412ff-6455-444c-a91c-350651a82800") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 21 00:25:31 crc kubenswrapper[5118]: I0121 00:25:31.434711 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/26e412ff-6455-444c-a91c-350651a82800-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:31 crc kubenswrapper[5118]: I0121 00:25:31.445393 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/26e412ff-6455-444c-a91c-350651a82800-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs\" (UID: \"26e412ff-6455-444c-a91c-350651a82800\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:31 crc kubenswrapper[5118]: I0121 00:25:31.521559 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"
Jan 21 00:25:32 crc kubenswrapper[5118]: I0121 00:25:32.219370 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"7ce5576f-ce16-4a13-9cca-8fcc68e399e7","Type":"ContainerStarted","Data":"421cd4d17817ee2c9aa394507d92e28f5e6f6fc58f6ebf5fbfab0b0a8dac2360"}
Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.135231 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv"]
Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.161467 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv"]
Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.161590 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv"
Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.164044 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\""
Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.165378 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\""
Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.259930 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d85a2d17-ce06-446d-aabc-cc02486c78eb-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv"
Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.259991 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d85a2d17-ce06-446d-aabc-cc02486c78eb-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv"
Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.260021 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6k6n\" (UniqueName: \"kubernetes.io/projected/d85a2d17-ce06-446d-aabc-cc02486c78eb-kube-api-access-n6k6n\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv"
Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.260146 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d85a2d17-ce06-446d-aabc-cc02486c78eb-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv"
Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.260294 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d85a2d17-ce06-446d-aabc-cc02486c78eb-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv"
Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.362046 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d85a2d17-ce06-446d-aabc-cc02486c78eb-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv"
Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.362224 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d85a2d17-ce06-446d-aabc-cc02486c78eb-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv"
Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.362273 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n6k6n\" (UniqueName: \"kubernetes.io/projected/d85a2d17-ce06-446d-aabc-cc02486c78eb-kube-api-access-n6k6n\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv"
Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.362316 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d85a2d17-ce06-446d-aabc-cc02486c78eb-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv"
Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.362371 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d85a2d17-ce06-446d-aabc-cc02486c78eb-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv"
Jan 21 00:25:33 crc kubenswrapper[5118]: E0121 00:25:33.362597 5118 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Jan 21 00:25:33 crc kubenswrapper[5118]: E0121 00:25:33.362680 5118 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d85a2d17-ce06-446d-aabc-cc02486c78eb-default-cloud1-sens-meter-proxy-tls podName:d85a2d17-ce06-446d-aabc-cc02486c78eb nodeName:}" failed. No retries permitted until 2026-01-21 00:25:33.862658046 +0000 UTC m=+989.186905064 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d85a2d17-ce06-446d-aabc-cc02486c78eb-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" (UID: "d85a2d17-ce06-446d-aabc-cc02486c78eb") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.363103 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d85a2d17-ce06-446d-aabc-cc02486c78eb-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.363626 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d85a2d17-ce06-446d-aabc-cc02486c78eb-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.369900 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d85a2d17-ce06-446d-aabc-cc02486c78eb-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.383469 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6k6n\" (UniqueName: \"kubernetes.io/projected/d85a2d17-ce06-446d-aabc-cc02486c78eb-kube-api-access-n6k6n\") pod 
\"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.411673 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb"] Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.767593 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs"] Jan 21 00:25:33 crc kubenswrapper[5118]: W0121 00:25:33.767944 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26e412ff_6455_444c_a91c_350651a82800.slice/crio-ceac36ea14d1c46c6b20c8273c971096fa106bd1dabd77760db07b79ba1615c5 WatchSource:0}: Error finding container ceac36ea14d1c46c6b20c8273c971096fa106bd1dabd77760db07b79ba1615c5: Status 404 returned error can't find the container with id ceac36ea14d1c46c6b20c8273c971096fa106bd1dabd77760db07b79ba1615c5 Jan 21 00:25:33 crc kubenswrapper[5118]: I0121 00:25:33.871008 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d85a2d17-ce06-446d-aabc-cc02486c78eb-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" Jan 21 00:25:33 crc kubenswrapper[5118]: E0121 00:25:33.871224 5118 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 00:25:33 crc kubenswrapper[5118]: E0121 00:25:33.871307 5118 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d85a2d17-ce06-446d-aabc-cc02486c78eb-default-cloud1-sens-meter-proxy-tls podName:d85a2d17-ce06-446d-aabc-cc02486c78eb nodeName:}" failed. No retries permitted until 2026-01-21 00:25:34.871285025 +0000 UTC m=+990.195532043 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d85a2d17-ce06-446d-aabc-cc02486c78eb-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" (UID: "d85a2d17-ce06-446d-aabc-cc02486c78eb") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 00:25:34 crc kubenswrapper[5118]: I0121 00:25:34.251932 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-t9sb4" event={"ID":"7209940f-514b-4189-beeb-fbd77f7e6a15","Type":"ContainerStarted","Data":"8e74b6d7630ef44b0e3c85c4ac3e240a6ae82de01652edb61a1eb1643ff27137"} Jan 21 00:25:34 crc kubenswrapper[5118]: I0121 00:25:34.253995 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs" event={"ID":"26e412ff-6455-444c-a91c-350651a82800","Type":"ContainerStarted","Data":"ceac36ea14d1c46c6b20c8273c971096fa106bd1dabd77760db07b79ba1615c5"} Jan 21 00:25:34 crc kubenswrapper[5118]: I0121 00:25:34.256228 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"ee3806b6-31c0-470c-8dcf-9f7e40a5929a","Type":"ContainerStarted","Data":"ec3511aff79feae6005422d638caec3fadcfbca403677a9e39ef1ad84ab072f5"} Jan 21 00:25:34 crc kubenswrapper[5118]: I0121 00:25:34.257308 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb" event={"ID":"24b1ba21-7cfb-4bdc-84e7-63e5bacee435","Type":"ContainerStarted","Data":"c56e9d97a494fdcb31778d1de084434b0953408d1df78c72820e14cfd6dd08c6"} Jan 21 
00:25:34 crc kubenswrapper[5118]: I0121 00:25:34.271088 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-t9sb4" podStartSLOduration=2.292288161 podStartE2EDuration="26.27106316s" podCreationTimestamp="2026-01-21 00:25:08 +0000 UTC" firstStartedPulling="2026-01-21 00:25:09.345368629 +0000 UTC m=+964.669615637" lastFinishedPulling="2026-01-21 00:25:33.324143618 +0000 UTC m=+988.648390636" observedRunningTime="2026-01-21 00:25:34.265872832 +0000 UTC m=+989.590119870" watchObservedRunningTime="2026-01-21 00:25:34.27106316 +0000 UTC m=+989.595310188" Jan 21 00:25:34 crc kubenswrapper[5118]: I0121 00:25:34.895714 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d85a2d17-ce06-446d-aabc-cc02486c78eb-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" Jan 21 00:25:34 crc kubenswrapper[5118]: I0121 00:25:34.908133 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d85a2d17-ce06-446d-aabc-cc02486c78eb-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv\" (UID: \"d85a2d17-ce06-446d-aabc-cc02486c78eb\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" Jan 21 00:25:34 crc kubenswrapper[5118]: I0121 00:25:34.990409 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" Jan 21 00:25:35 crc kubenswrapper[5118]: I0121 00:25:35.579262 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv"] Jan 21 00:25:36 crc kubenswrapper[5118]: I0121 00:25:36.276587 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"7ce5576f-ce16-4a13-9cca-8fcc68e399e7","Type":"ContainerStarted","Data":"229cf871ae4aa4097076027ee33806c3db8360924111c2c442add1b2ef63fd69"} Jan 21 00:25:36 crc kubenswrapper[5118]: I0121 00:25:36.279897 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"ee3806b6-31c0-470c-8dcf-9f7e40a5929a","Type":"ContainerStarted","Data":"723d5242be973ba102c1a7c4581a92e64dac40c852f18224fc0fa8b809df387e"} Jan 21 00:25:36 crc kubenswrapper[5118]: I0121 00:25:36.282470 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" event={"ID":"d85a2d17-ce06-446d-aabc-cc02486c78eb","Type":"ContainerStarted","Data":"9f86274a5985636b231eb930449ae39583b1a2323dda22968e2aa6a19d252a42"} Jan 21 00:25:41 crc kubenswrapper[5118]: I0121 00:25:41.478768 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h"] Jan 21 00:25:41 crc kubenswrapper[5118]: I0121 00:25:41.794757 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h"] Jan 21 00:25:41 crc kubenswrapper[5118]: I0121 00:25:41.795000 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" Jan 21 00:25:41 crc kubenswrapper[5118]: I0121 00:25:41.800898 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Jan 21 00:25:41 crc kubenswrapper[5118]: I0121 00:25:41.801486 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Jan 21 00:25:41 crc kubenswrapper[5118]: I0121 00:25:41.892495 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/acab55a4-1334-4e6a-9160-0693a38de48d-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h\" (UID: \"acab55a4-1334-4e6a-9160-0693a38de48d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" Jan 21 00:25:41 crc kubenswrapper[5118]: I0121 00:25:41.892684 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/acab55a4-1334-4e6a-9160-0693a38de48d-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h\" (UID: \"acab55a4-1334-4e6a-9160-0693a38de48d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" Jan 21 00:25:41 crc kubenswrapper[5118]: I0121 00:25:41.892733 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw2jb\" (UniqueName: \"kubernetes.io/projected/acab55a4-1334-4e6a-9160-0693a38de48d-kube-api-access-nw2jb\") pod \"default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h\" (UID: \"acab55a4-1334-4e6a-9160-0693a38de48d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" Jan 21 00:25:41 crc kubenswrapper[5118]: I0121 00:25:41.892771 5118 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/acab55a4-1334-4e6a-9160-0693a38de48d-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h\" (UID: \"acab55a4-1334-4e6a-9160-0693a38de48d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" Jan 21 00:25:41 crc kubenswrapper[5118]: I0121 00:25:41.993738 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/acab55a4-1334-4e6a-9160-0693a38de48d-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h\" (UID: \"acab55a4-1334-4e6a-9160-0693a38de48d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" Jan 21 00:25:41 crc kubenswrapper[5118]: I0121 00:25:41.993834 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/acab55a4-1334-4e6a-9160-0693a38de48d-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h\" (UID: \"acab55a4-1334-4e6a-9160-0693a38de48d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" Jan 21 00:25:41 crc kubenswrapper[5118]: I0121 00:25:41.994024 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/acab55a4-1334-4e6a-9160-0693a38de48d-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h\" (UID: \"acab55a4-1334-4e6a-9160-0693a38de48d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" Jan 21 00:25:41 crc kubenswrapper[5118]: I0121 00:25:41.994088 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nw2jb\" (UniqueName: 
\"kubernetes.io/projected/acab55a4-1334-4e6a-9160-0693a38de48d-kube-api-access-nw2jb\") pod \"default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h\" (UID: \"acab55a4-1334-4e6a-9160-0693a38de48d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" Jan 21 00:25:41 crc kubenswrapper[5118]: I0121 00:25:41.995056 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/acab55a4-1334-4e6a-9160-0693a38de48d-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h\" (UID: \"acab55a4-1334-4e6a-9160-0693a38de48d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" Jan 21 00:25:41 crc kubenswrapper[5118]: I0121 00:25:41.995343 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/acab55a4-1334-4e6a-9160-0693a38de48d-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h\" (UID: \"acab55a4-1334-4e6a-9160-0693a38de48d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.012295 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/acab55a4-1334-4e6a-9160-0693a38de48d-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h\" (UID: \"acab55a4-1334-4e6a-9160-0693a38de48d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.014817 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw2jb\" (UniqueName: \"kubernetes.io/projected/acab55a4-1334-4e6a-9160-0693a38de48d-kube-api-access-nw2jb\") pod \"default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h\" (UID: \"acab55a4-1334-4e6a-9160-0693a38de48d\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.185600 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.323492 5118 generic.go:358] "Generic (PLEG): container finished" podID="7ce5576f-ce16-4a13-9cca-8fcc68e399e7" containerID="229cf871ae4aa4097076027ee33806c3db8360924111c2c442add1b2ef63fd69" exitCode=0 Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.323671 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"7ce5576f-ce16-4a13-9cca-8fcc68e399e7","Type":"ContainerDied","Data":"229cf871ae4aa4097076027ee33806c3db8360924111c2c442add1b2ef63fd69"} Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.563245 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl"] Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.668601 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl"] Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.668735 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.671699 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.804725 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1eae1c63-1b25-4f53-83e6-cc1fcd7cf325-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl\" (UID: \"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.804945 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/1eae1c63-1b25-4f53-83e6-cc1fcd7cf325-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl\" (UID: \"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.805144 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9w4p\" (UniqueName: \"kubernetes.io/projected/1eae1c63-1b25-4f53-83e6-cc1fcd7cf325-kube-api-access-m9w4p\") pod \"default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl\" (UID: \"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.805266 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1eae1c63-1b25-4f53-83e6-cc1fcd7cf325-sg-core-config\") 
pod \"default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl\" (UID: \"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.906622 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1eae1c63-1b25-4f53-83e6-cc1fcd7cf325-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl\" (UID: \"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.906731 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1eae1c63-1b25-4f53-83e6-cc1fcd7cf325-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl\" (UID: \"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.906763 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/1eae1c63-1b25-4f53-83e6-cc1fcd7cf325-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl\" (UID: \"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.906795 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m9w4p\" (UniqueName: \"kubernetes.io/projected/1eae1c63-1b25-4f53-83e6-cc1fcd7cf325-kube-api-access-m9w4p\") pod \"default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl\" (UID: \"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" Jan 21 
00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.907248 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1eae1c63-1b25-4f53-83e6-cc1fcd7cf325-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl\" (UID: \"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.907559 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1eae1c63-1b25-4f53-83e6-cc1fcd7cf325-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl\" (UID: \"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.915005 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/1eae1c63-1b25-4f53-83e6-cc1fcd7cf325-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl\" (UID: \"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.924551 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9w4p\" (UniqueName: \"kubernetes.io/projected/1eae1c63-1b25-4f53-83e6-cc1fcd7cf325-kube-api-access-m9w4p\") pod \"default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl\" (UID: \"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" Jan 21 00:25:42 crc kubenswrapper[5118]: I0121 00:25:42.993356 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" Jan 21 00:25:44 crc kubenswrapper[5118]: I0121 00:25:44.911984 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h"] Jan 21 00:25:45 crc kubenswrapper[5118]: I0121 00:25:45.000849 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl"] Jan 21 00:25:45 crc kubenswrapper[5118]: I0121 00:25:45.347696 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs" event={"ID":"26e412ff-6455-444c-a91c-350651a82800","Type":"ContainerStarted","Data":"8738b3a66cbc74e468e3b204cb7310dde809f3643cfb6ae453e6104774705234"} Jan 21 00:25:45 crc kubenswrapper[5118]: I0121 00:25:45.349826 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" event={"ID":"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325","Type":"ContainerStarted","Data":"99e773358001d9f7ce292c175366fcc169452badba9c1534d0209db9614cfb19"} Jan 21 00:25:45 crc kubenswrapper[5118]: I0121 00:25:45.353683 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"ee3806b6-31c0-470c-8dcf-9f7e40a5929a","Type":"ContainerStarted","Data":"1e3e432b35c14aa21450986396726907343119720fbaeaa27183e51ab407524d"} Jan 21 00:25:45 crc kubenswrapper[5118]: I0121 00:25:45.357125 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" event={"ID":"d85a2d17-ce06-446d-aabc-cc02486c78eb","Type":"ContainerStarted","Data":"f2d4c1d281fc251da2dd378f214bf7ca0adc5f9277970df59b4cfa97ac8c7c20"} Jan 21 00:25:45 crc kubenswrapper[5118]: I0121 00:25:45.360674 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb" event={"ID":"24b1ba21-7cfb-4bdc-84e7-63e5bacee435","Type":"ContainerStarted","Data":"e2b21fe69a163cf0d2ee3adef2f25c7463ef6111f457ad4d7e9518ab3b90b164"} Jan 21 00:25:45 crc kubenswrapper[5118]: I0121 00:25:45.362059 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" event={"ID":"acab55a4-1334-4e6a-9160-0693a38de48d","Type":"ContainerStarted","Data":"8a5e87e1fccf236243ffb603af18266106abc23e2e12f2e0a3878415890938c9"} Jan 21 00:25:45 crc kubenswrapper[5118]: I0121 00:25:45.380221 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=5.046829273 podStartE2EDuration="49.380201558s" podCreationTimestamp="2026-01-21 00:24:56 +0000 UTC" firstStartedPulling="2026-01-21 00:25:00.361418288 +0000 UTC m=+955.685665306" lastFinishedPulling="2026-01-21 00:25:44.694790573 +0000 UTC m=+1000.019037591" observedRunningTime="2026-01-21 00:25:45.377728842 +0000 UTC m=+1000.701975870" watchObservedRunningTime="2026-01-21 00:25:45.380201558 +0000 UTC m=+1000.704448586" Jan 21 00:25:50 crc kubenswrapper[5118]: I0121 00:25:50.126775 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Jan 21 00:25:51 crc kubenswrapper[5118]: I0121 00:25:51.410202 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"7ce5576f-ce16-4a13-9cca-8fcc68e399e7","Type":"ContainerStarted","Data":"2c2517ff7c93f633c6b920cab1a86217f87a3a19c7371766a07d65b2322edfda"} Jan 21 00:25:55 crc kubenswrapper[5118]: I0121 00:25:55.444096 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" 
event={"ID":"7ce5576f-ce16-4a13-9cca-8fcc68e399e7","Type":"ContainerStarted","Data":"b6f5a6439d79a35d9ec21ad9367242532da095a1c202f12cef746c6ea15cde14"}
Jan 21 00:25:56 crc kubenswrapper[5118]: I0121 00:25:56.357429 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-8wgmh"]
Jan 21 00:25:56 crc kubenswrapper[5118]: I0121 00:25:56.357670 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" podUID="c7620811-1f39-4605-90d6-9b447532937f" containerName="default-interconnect" containerID="cri-o://f1fc1959422e725ed561d62b5a5294dd7d8683eaf6ebe4655d0f9145228a6273" gracePeriod=30
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.479772 5118 generic.go:358] "Generic (PLEG): container finished" podID="c7620811-1f39-4605-90d6-9b447532937f" containerID="f1fc1959422e725ed561d62b5a5294dd7d8683eaf6ebe4655d0f9145228a6273" exitCode=0
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.479888 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" event={"ID":"c7620811-1f39-4605-90d6-9b447532937f","Type":"ContainerDied","Data":"f1fc1959422e725ed561d62b5a5294dd7d8683eaf6ebe4655d0f9145228a6273"}
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.746315 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.800378 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xprn5"]
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.801275 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7620811-1f39-4605-90d6-9b447532937f" containerName="default-interconnect"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.801300 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7620811-1f39-4605-90d6-9b447532937f" containerName="default-interconnect"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.801453 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="c7620811-1f39-4605-90d6-9b447532937f" containerName="default-interconnect"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.808296 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xprn5"]
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.808469 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.855785 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-sasl-users\") pod \"c7620811-1f39-4605-90d6-9b447532937f\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") "
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.855889 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c7620811-1f39-4605-90d6-9b447532937f-sasl-config\") pod \"c7620811-1f39-4605-90d6-9b447532937f\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") "
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.855955 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-inter-router-credentials\") pod \"c7620811-1f39-4605-90d6-9b447532937f\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") "
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.855982 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-openstack-ca\") pod \"c7620811-1f39-4605-90d6-9b447532937f\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") "
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.856054 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-inter-router-ca\") pod \"c7620811-1f39-4605-90d6-9b447532937f\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") "
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.856103 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-openstack-credentials\") pod \"c7620811-1f39-4605-90d6-9b447532937f\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") "
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.856133 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26hjq\" (UniqueName: \"kubernetes.io/projected/c7620811-1f39-4605-90d6-9b447532937f-kube-api-access-26hjq\") pod \"c7620811-1f39-4605-90d6-9b447532937f\" (UID: \"c7620811-1f39-4605-90d6-9b447532937f\") "
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.856291 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a46b6cc9-3c89-49ba-8687-5e3eecdee283-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.856327 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a46b6cc9-3c89-49ba-8687-5e3eecdee283-sasl-users\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.856400 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8gnc\" (UniqueName: \"kubernetes.io/projected/a46b6cc9-3c89-49ba-8687-5e3eecdee283-kube-api-access-k8gnc\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.856429 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a46b6cc9-3c89-49ba-8687-5e3eecdee283-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.856488 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a46b6cc9-3c89-49ba-8687-5e3eecdee283-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.856547 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a46b6cc9-3c89-49ba-8687-5e3eecdee283-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.856583 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a46b6cc9-3c89-49ba-8687-5e3eecdee283-sasl-config\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.857638 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7620811-1f39-4605-90d6-9b447532937f-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "c7620811-1f39-4605-90d6-9b447532937f" (UID: "c7620811-1f39-4605-90d6-9b447532937f"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.863279 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "c7620811-1f39-4605-90d6-9b447532937f" (UID: "c7620811-1f39-4605-90d6-9b447532937f"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.863392 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7620811-1f39-4605-90d6-9b447532937f-kube-api-access-26hjq" (OuterVolumeSpecName: "kube-api-access-26hjq") pod "c7620811-1f39-4605-90d6-9b447532937f" (UID: "c7620811-1f39-4605-90d6-9b447532937f"). InnerVolumeSpecName "kube-api-access-26hjq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.867570 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "c7620811-1f39-4605-90d6-9b447532937f" (UID: "c7620811-1f39-4605-90d6-9b447532937f"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.867767 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "c7620811-1f39-4605-90d6-9b447532937f" (UID: "c7620811-1f39-4605-90d6-9b447532937f"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.870282 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "c7620811-1f39-4605-90d6-9b447532937f" (UID: "c7620811-1f39-4605-90d6-9b447532937f"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.875361 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "c7620811-1f39-4605-90d6-9b447532937f" (UID: "c7620811-1f39-4605-90d6-9b447532937f"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.960341 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a46b6cc9-3c89-49ba-8687-5e3eecdee283-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.960410 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a46b6cc9-3c89-49ba-8687-5e3eecdee283-sasl-config\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.960468 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a46b6cc9-3c89-49ba-8687-5e3eecdee283-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.960512 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a46b6cc9-3c89-49ba-8687-5e3eecdee283-sasl-users\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.960599 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k8gnc\" (UniqueName: \"kubernetes.io/projected/a46b6cc9-3c89-49ba-8687-5e3eecdee283-kube-api-access-k8gnc\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.960632 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a46b6cc9-3c89-49ba-8687-5e3eecdee283-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.960705 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a46b6cc9-3c89-49ba-8687-5e3eecdee283-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.960818 5118 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\""
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.960840 5118 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\""
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.960853 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26hjq\" (UniqueName: \"kubernetes.io/projected/c7620811-1f39-4605-90d6-9b447532937f-kube-api-access-26hjq\") on node \"crc\" DevicePath \"\""
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.960864 5118 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-sasl-users\") on node \"crc\" DevicePath \"\""
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.960879 5118 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c7620811-1f39-4605-90d6-9b447532937f-sasl-config\") on node \"crc\" DevicePath \"\""
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.960891 5118 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\""
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.960903 5118 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c7620811-1f39-4605-90d6-9b447532937f-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\""
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.963985 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a46b6cc9-3c89-49ba-8687-5e3eecdee283-sasl-config\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.967625 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a46b6cc9-3c89-49ba-8687-5e3eecdee283-sasl-users\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.970437 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a46b6cc9-3c89-49ba-8687-5e3eecdee283-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.975456 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a46b6cc9-3c89-49ba-8687-5e3eecdee283-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.977949 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a46b6cc9-3c89-49ba-8687-5e3eecdee283-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.978073 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a46b6cc9-3c89-49ba-8687-5e3eecdee283-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:57 crc kubenswrapper[5118]: I0121 00:25:57.988493 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8gnc\" (UniqueName: \"kubernetes.io/projected/a46b6cc9-3c89-49ba-8687-5e3eecdee283-kube-api-access-k8gnc\") pod \"default-interconnect-55bf8d5cb-xprn5\" (UID: \"a46b6cc9-3c89-49ba-8687-5e3eecdee283\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:58 crc kubenswrapper[5118]: I0121 00:25:58.146455 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5"
Jan 21 00:25:58 crc kubenswrapper[5118]: I0121 00:25:58.490140 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs" event={"ID":"26e412ff-6455-444c-a91c-350651a82800","Type":"ContainerStarted","Data":"a3d28254a6dab2a60ebf5a2bc1c2796db8c85b94da5b91ee378d5500920bed82"}
Jan 21 00:25:58 crc kubenswrapper[5118]: I0121 00:25:58.492010 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" event={"ID":"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325","Type":"ContainerStarted","Data":"483741b330a24e1cf5cb3adc5063414109c29156e0e9d218692bd12feac6e8b4"}
Jan 21 00:25:58 crc kubenswrapper[5118]: I0121 00:25:58.494387 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" event={"ID":"d85a2d17-ce06-446d-aabc-cc02486c78eb","Type":"ContainerStarted","Data":"69b00a0156e0303cbc14bbf1207bef44387ecf9f8dd5078ea533d0be25181729"}
Jan 21 00:25:58 crc kubenswrapper[5118]: I0121 00:25:58.496972 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb" event={"ID":"24b1ba21-7cfb-4bdc-84e7-63e5bacee435","Type":"ContainerStarted","Data":"b7188e6695c47acff7a12e7053541ee80d27aee4b735b07ca85eebea5ec6a8cb"}
Jan 21 00:25:58 crc kubenswrapper[5118]: I0121 00:25:58.498578 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" event={"ID":"acab55a4-1334-4e6a-9160-0693a38de48d","Type":"ContainerStarted","Data":"649f751907bec469a39590438ae3fd82c040b7da503a8cbab1b8cb5a3d46cd75"}
Jan 21 00:25:58 crc kubenswrapper[5118]: I0121 00:25:58.500361 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh" event={"ID":"c7620811-1f39-4605-90d6-9b447532937f","Type":"ContainerDied","Data":"bf3233a447ba0be5b78ee5f3d88faacd0222e3b672c9126feed07e5fa139fd2c"}
Jan 21 00:25:58 crc kubenswrapper[5118]: I0121 00:25:58.500394 5118 scope.go:117] "RemoveContainer" containerID="f1fc1959422e725ed561d62b5a5294dd7d8683eaf6ebe4655d0f9145228a6273"
Jan 21 00:25:58 crc kubenswrapper[5118]: I0121 00:25:58.500422 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-8wgmh"
Jan 21 00:25:58 crc kubenswrapper[5118]: I0121 00:25:58.504655 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"7ce5576f-ce16-4a13-9cca-8fcc68e399e7","Type":"ContainerStarted","Data":"f4461eb6fce268583d480a8db0dc7f0be472cd50329d47e00de38fa480a17965"}
Jan 21 00:25:58 crc kubenswrapper[5118]: I0121 00:25:58.536264 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=32.131944749 podStartE2EDuration="47.536245191s" podCreationTimestamp="2026-01-21 00:25:11 +0000 UTC" firstStartedPulling="2026-01-21 00:25:42.325077524 +0000 UTC m=+997.649324542" lastFinishedPulling="2026-01-21 00:25:57.729377966 +0000 UTC m=+1013.053624984" observedRunningTime="2026-01-21 00:25:58.532619674 +0000 UTC m=+1013.856866712" watchObservedRunningTime="2026-01-21 00:25:58.536245191 +0000 UTC m=+1013.860492199"
Jan 21 00:25:58 crc kubenswrapper[5118]: I0121 00:25:58.561029 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-8wgmh"]
Jan 21 00:25:58 crc kubenswrapper[5118]: I0121 00:25:58.567421 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xprn5"]
Jan 21 00:25:58 crc kubenswrapper[5118]: I0121 00:25:58.572107 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-8wgmh"]
Jan 21 00:25:58 crc kubenswrapper[5118]: W0121 00:25:58.574259 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda46b6cc9_3c89_49ba_8687_5e3eecdee283.slice/crio-0f7a899e86401b1ca19bded83299d6eb69f13782c12afabb134e63825bb5a060 WatchSource:0}: Error finding container 0f7a899e86401b1ca19bded83299d6eb69f13782c12afabb134e63825bb5a060: Status 404 returned error can't find the container with id 0f7a899e86401b1ca19bded83299d6eb69f13782c12afabb134e63825bb5a060
Jan 21 00:25:58 crc kubenswrapper[5118]: I0121 00:25:58.987372 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7620811-1f39-4605-90d6-9b447532937f" path="/var/lib/kubelet/pods/c7620811-1f39-4605-90d6-9b447532937f/volumes"
Jan 21 00:25:59 crc kubenswrapper[5118]: I0121 00:25:59.514707 5118 generic.go:358] "Generic (PLEG): container finished" podID="d85a2d17-ce06-446d-aabc-cc02486c78eb" containerID="69b00a0156e0303cbc14bbf1207bef44387ecf9f8dd5078ea533d0be25181729" exitCode=0
Jan 21 00:25:59 crc kubenswrapper[5118]: I0121 00:25:59.514815 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" event={"ID":"d85a2d17-ce06-446d-aabc-cc02486c78eb","Type":"ContainerDied","Data":"69b00a0156e0303cbc14bbf1207bef44387ecf9f8dd5078ea533d0be25181729"}
Jan 21 00:25:59 crc kubenswrapper[5118]: I0121 00:25:59.519126 5118 generic.go:358] "Generic (PLEG): container finished" podID="24b1ba21-7cfb-4bdc-84e7-63e5bacee435" containerID="b7188e6695c47acff7a12e7053541ee80d27aee4b735b07ca85eebea5ec6a8cb" exitCode=0
Jan 21 00:25:59 crc kubenswrapper[5118]: I0121 00:25:59.519380 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb" event={"ID":"24b1ba21-7cfb-4bdc-84e7-63e5bacee435","Type":"ContainerDied","Data":"b7188e6695c47acff7a12e7053541ee80d27aee4b735b07ca85eebea5ec6a8cb"}
Jan 21 00:25:59 crc kubenswrapper[5118]: I0121 00:25:59.525411 5118 generic.go:358] "Generic (PLEG): container finished" podID="acab55a4-1334-4e6a-9160-0693a38de48d" containerID="649f751907bec469a39590438ae3fd82c040b7da503a8cbab1b8cb5a3d46cd75" exitCode=0
Jan 21 00:25:59 crc kubenswrapper[5118]: I0121 00:25:59.525534 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" event={"ID":"acab55a4-1334-4e6a-9160-0693a38de48d","Type":"ContainerDied","Data":"649f751907bec469a39590438ae3fd82c040b7da503a8cbab1b8cb5a3d46cd75"}
Jan 21 00:25:59 crc kubenswrapper[5118]: I0121 00:25:59.529695 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5" event={"ID":"a46b6cc9-3c89-49ba-8687-5e3eecdee283","Type":"ContainerStarted","Data":"3192f18e6bc667919c03f8bdd5ec973be7e709d590e8e0a775d23f1b22b7250d"}
Jan 21 00:25:59 crc kubenswrapper[5118]: I0121 00:25:59.529747 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5" event={"ID":"a46b6cc9-3c89-49ba-8687-5e3eecdee283","Type":"ContainerStarted","Data":"0f7a899e86401b1ca19bded83299d6eb69f13782c12afabb134e63825bb5a060"}
Jan 21 00:25:59 crc kubenswrapper[5118]: I0121 00:25:59.532933 5118 generic.go:358] "Generic (PLEG): container finished" podID="26e412ff-6455-444c-a91c-350651a82800" containerID="a3d28254a6dab2a60ebf5a2bc1c2796db8c85b94da5b91ee378d5500920bed82" exitCode=0
Jan 21 00:25:59 crc kubenswrapper[5118]: I0121 00:25:59.533005 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs" event={"ID":"26e412ff-6455-444c-a91c-350651a82800","Type":"ContainerDied","Data":"a3d28254a6dab2a60ebf5a2bc1c2796db8c85b94da5b91ee378d5500920bed82"}
Jan 21 00:25:59 crc kubenswrapper[5118]: I0121 00:25:59.534762 5118 generic.go:358] "Generic (PLEG): container finished" podID="1eae1c63-1b25-4f53-83e6-cc1fcd7cf325" containerID="483741b330a24e1cf5cb3adc5063414109c29156e0e9d218692bd12feac6e8b4" exitCode=0
Jan 21 00:25:59 crc kubenswrapper[5118]: I0121 00:25:59.535579 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" event={"ID":"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325","Type":"ContainerDied","Data":"483741b330a24e1cf5cb3adc5063414109c29156e0e9d218692bd12feac6e8b4"}
Jan 21 00:25:59 crc kubenswrapper[5118]: I0121 00:25:59.551060 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-xprn5" podStartSLOduration=3.551041014 podStartE2EDuration="3.551041014s" podCreationTimestamp="2026-01-21 00:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:25:59.54866195 +0000 UTC m=+1014.872908978" watchObservedRunningTime="2026-01-21 00:25:59.551041014 +0000 UTC m=+1014.875288032"
Jan 21 00:26:00 crc kubenswrapper[5118]: I0121 00:26:00.127219 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0"
Jan 21 00:26:00 crc kubenswrapper[5118]: I0121 00:26:00.156964 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482586-d784z"]
Jan 21 00:26:00 crc kubenswrapper[5118]: I0121 00:26:00.165898 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482586-d784z"]
Jan 21 00:26:00 crc kubenswrapper[5118]: I0121 00:26:00.166014 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482586-d784z"
Jan 21 00:26:00 crc kubenswrapper[5118]: I0121 00:26:00.167888 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 00:26:00 crc kubenswrapper[5118]: I0121 00:26:00.168652 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\""
Jan 21 00:26:00 crc kubenswrapper[5118]: I0121 00:26:00.168896 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 00:26:00 crc kubenswrapper[5118]: I0121 00:26:00.216150 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0"
Jan 21 00:26:00 crc kubenswrapper[5118]: I0121 00:26:00.313271 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jwzg\" (UniqueName: \"kubernetes.io/projected/68ab75a7-408a-4c6c-b232-3b08fa01168f-kube-api-access-2jwzg\") pod \"auto-csr-approver-29482586-d784z\" (UID: \"68ab75a7-408a-4c6c-b232-3b08fa01168f\") " pod="openshift-infra/auto-csr-approver-29482586-d784z"
Jan 21 00:26:00 crc kubenswrapper[5118]: I0121 00:26:00.415198 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2jwzg\" (UniqueName: \"kubernetes.io/projected/68ab75a7-408a-4c6c-b232-3b08fa01168f-kube-api-access-2jwzg\") pod \"auto-csr-approver-29482586-d784z\" (UID: \"68ab75a7-408a-4c6c-b232-3b08fa01168f\") " pod="openshift-infra/auto-csr-approver-29482586-d784z"
Jan 21 00:26:00 crc kubenswrapper[5118]: I0121 00:26:00.435709 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jwzg\" (UniqueName: \"kubernetes.io/projected/68ab75a7-408a-4c6c-b232-3b08fa01168f-kube-api-access-2jwzg\") pod \"auto-csr-approver-29482586-d784z\" (UID: \"68ab75a7-408a-4c6c-b232-3b08fa01168f\") " pod="openshift-infra/auto-csr-approver-29482586-d784z"
Jan 21 00:26:00 crc kubenswrapper[5118]: I0121 00:26:00.488006 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482586-d784z"
Jan 21 00:26:00 crc kubenswrapper[5118]: I0121 00:26:00.586230 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Jan 21 00:26:00 crc kubenswrapper[5118]: I0121 00:26:00.993123 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482586-d784z"]
Jan 21 00:26:00 crc kubenswrapper[5118]: W0121 00:26:00.999629 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68ab75a7_408a_4c6c_b232_3b08fa01168f.slice/crio-b64e3b17ecc5d073ee53d02cb3d85b91577988f4c568282cedf644371523f9d7 WatchSource:0}: Error finding container b64e3b17ecc5d073ee53d02cb3d85b91577988f4c568282cedf644371523f9d7: Status 404 returned error can't find the container with id b64e3b17ecc5d073ee53d02cb3d85b91577988f4c568282cedf644371523f9d7
Jan 21 00:26:01 crc kubenswrapper[5118]: I0121 00:26:01.549636 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482586-d784z" event={"ID":"68ab75a7-408a-4c6c-b232-3b08fa01168f","Type":"ContainerStarted","Data":"b64e3b17ecc5d073ee53d02cb3d85b91577988f4c568282cedf644371523f9d7"}
Jan 21 00:26:04 crc kubenswrapper[5118]: I0121 00:26:04.574509 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs" event={"ID":"26e412ff-6455-444c-a91c-350651a82800","Type":"ContainerStarted","Data":"5d459bf3386773c9dda417fa6a07fdeae0cdef72d0feb8a7c1d583f9f15c1661"}
Jan 21 00:26:04 crc kubenswrapper[5118]: I0121 00:26:04.575308 5118 scope.go:117] "RemoveContainer" containerID="a3d28254a6dab2a60ebf5a2bc1c2796db8c85b94da5b91ee378d5500920bed82"
Jan 21 00:26:04 crc kubenswrapper[5118]: I0121 00:26:04.579743 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" event={"ID":"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325","Type":"ContainerStarted","Data":"1a75281abbb33169c323d522e105f10db0b376bf352a79517199fe6a420af7f6"}
Jan 21 00:26:04 crc kubenswrapper[5118]: I0121 00:26:04.580257 5118 scope.go:117] "RemoveContainer" containerID="483741b330a24e1cf5cb3adc5063414109c29156e0e9d218692bd12feac6e8b4"
Jan 21 00:26:04 crc kubenswrapper[5118]: I0121 00:26:04.583332 5118 generic.go:358] "Generic (PLEG): container finished" podID="68ab75a7-408a-4c6c-b232-3b08fa01168f" containerID="e7c14aa1a065f0bed30e2daf241d55f897716ea6e853a751286554e62ed26430" exitCode=0
Jan 21 00:26:04 crc kubenswrapper[5118]: I0121 00:26:04.583462 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482586-d784z" event={"ID":"68ab75a7-408a-4c6c-b232-3b08fa01168f","Type":"ContainerDied","Data":"e7c14aa1a065f0bed30e2daf241d55f897716ea6e853a751286554e62ed26430"}
Jan 21 00:26:04 crc kubenswrapper[5118]: I0121 00:26:04.591408 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" event={"ID":"d85a2d17-ce06-446d-aabc-cc02486c78eb","Type":"ContainerStarted","Data":"498be04683fbb7299f11e23afe189ee66bf2dcada8d87d187053f9d66008348f"}
Jan 21 00:26:04 crc kubenswrapper[5118]: I0121 00:26:04.592464 5118 scope.go:117] "RemoveContainer" containerID="69b00a0156e0303cbc14bbf1207bef44387ecf9f8dd5078ea533d0be25181729"
Jan 21 00:26:04 crc kubenswrapper[5118]: I0121 00:26:04.611132 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb" event={"ID":"24b1ba21-7cfb-4bdc-84e7-63e5bacee435","Type":"ContainerStarted","Data":"990243d51dedc7980aac4273c05e45a3d4d9a4d3783f62f73596f07ed05f30b5"}
Jan 21 00:26:04 crc kubenswrapper[5118]: I0121 00:26:04.611799 5118 scope.go:117] "RemoveContainer" containerID="b7188e6695c47acff7a12e7053541ee80d27aee4b735b07ca85eebea5ec6a8cb"
Jan 21 00:26:04 crc kubenswrapper[5118]: I0121 00:26:04.617681 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" event={"ID":"acab55a4-1334-4e6a-9160-0693a38de48d","Type":"ContainerStarted","Data":"3ecb51684f56a9e1811251ef29b60d7b10ff3b78aae4677157648658af5ebdab"}
Jan 21 00:26:04 crc kubenswrapper[5118]: I0121 00:26:04.618442 5118 scope.go:117] "RemoveContainer" containerID="649f751907bec469a39590438ae3fd82c040b7da503a8cbab1b8cb5a3d46cd75"
Jan 21 00:26:05 crc kubenswrapper[5118]: I0121 00:26:05.626350 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs" event={"ID":"26e412ff-6455-444c-a91c-350651a82800","Type":"ContainerStarted","Data":"cc1759be7d172c031dc325cb14271f9836225c4fbc7f2a429335bd43f291fb6b"}
Jan 21 00:26:05 crc kubenswrapper[5118]: I0121 00:26:05.630030 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" event={"ID":"1eae1c63-1b25-4f53-83e6-cc1fcd7cf325","Type":"ContainerStarted","Data":"f485508a6510dbfd1d2c15e2d0e64232223143ef93fdc7c82d7f02a834085e68"}
Jan 21 00:26:05 crc kubenswrapper[5118]: I0121 00:26:05.632482 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" event={"ID":"d85a2d17-ce06-446d-aabc-cc02486c78eb","Type":"ContainerStarted","Data":"61912a89d0814a2df6151e187957a604bb01ba2da8dcf8622f17fe5dc3ada5e8"}
Jan 21 00:26:05 crc kubenswrapper[5118]: I0121 00:26:05.638877 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb" event={"ID":"24b1ba21-7cfb-4bdc-84e7-63e5bacee435","Type":"ContainerStarted","Data":"70029a5035a8382499427413abdb2f73dff7b18578e95ee02eb932b02962535d"}
Jan 21 00:26:05 crc kubenswrapper[5118]: I0121 00:26:05.642106 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" event={"ID":"acab55a4-1334-4e6a-9160-0693a38de48d","Type":"ContainerStarted","Data":"068799f7ef9a238da3748a4273d71325e88b86df695e1dfee462a9e9c22365db"}
Jan 21 00:26:05 crc kubenswrapper[5118]: I0121 00:26:05.648750 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs" podStartSLOduration=5.203620531 podStartE2EDuration="36.648731036s" podCreationTimestamp="2026-01-21 00:25:29 +0000 UTC" firstStartedPulling="2026-01-21 00:25:33.770865746 +0000 UTC m=+989.095112764" lastFinishedPulling="2026-01-21 00:26:05.215976251 +0000 UTC m=+1020.540223269" observedRunningTime="2026-01-21 00:26:05.645218542 +0000 UTC m=+1020.969465580" watchObservedRunningTime="2026-01-21 00:26:05.648731036 +0000 UTC m=+1020.972978064"
Jan 21 00:26:05 crc kubenswrapper[5118]: I0121 00:26:05.681572 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl" podStartSLOduration=3.44324358 podStartE2EDuration="23.681548772s" podCreationTimestamp="2026-01-21 00:25:42 +0000 UTC" firstStartedPulling="2026-01-21 00:25:45.012778176 +0000 UTC m=+1000.337025184" lastFinishedPulling="2026-01-21 00:26:05.251083358 +0000 UTC
m=+1020.575330376" observedRunningTime="2026-01-21 00:26:05.665653788 +0000 UTC m=+1020.989900816" watchObservedRunningTime="2026-01-21 00:26:05.681548772 +0000 UTC m=+1021.005795790" Jan 21 00:26:05 crc kubenswrapper[5118]: I0121 00:26:05.691581 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv" podStartSLOduration=3.045703095 podStartE2EDuration="32.691563129s" podCreationTimestamp="2026-01-21 00:25:33 +0000 UTC" firstStartedPulling="2026-01-21 00:25:35.5927251 +0000 UTC m=+990.916972118" lastFinishedPulling="2026-01-21 00:26:05.238585124 +0000 UTC m=+1020.562832152" observedRunningTime="2026-01-21 00:26:05.691518868 +0000 UTC m=+1021.015765906" watchObservedRunningTime="2026-01-21 00:26:05.691563129 +0000 UTC m=+1021.015810157" Jan 21 00:26:05 crc kubenswrapper[5118]: I0121 00:26:05.731790 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h" podStartSLOduration=4.512669851 podStartE2EDuration="24.731767441s" podCreationTimestamp="2026-01-21 00:25:41 +0000 UTC" firstStartedPulling="2026-01-21 00:25:44.938824113 +0000 UTC m=+1000.263071131" lastFinishedPulling="2026-01-21 00:26:05.157921703 +0000 UTC m=+1020.482168721" observedRunningTime="2026-01-21 00:26:05.717213113 +0000 UTC m=+1021.041460131" watchObservedRunningTime="2026-01-21 00:26:05.731767441 +0000 UTC m=+1021.056014459" Jan 21 00:26:05 crc kubenswrapper[5118]: I0121 00:26:05.747854 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-879kb" podStartSLOduration=7.05141133 podStartE2EDuration="38.74782972s" podCreationTimestamp="2026-01-21 00:25:27 +0000 UTC" firstStartedPulling="2026-01-21 00:25:33.450445638 +0000 UTC m=+988.774692656" lastFinishedPulling="2026-01-21 00:26:05.146864028 +0000 UTC m=+1020.471111046" 
observedRunningTime="2026-01-21 00:26:05.741583613 +0000 UTC m=+1021.065830641" watchObservedRunningTime="2026-01-21 00:26:05.74782972 +0000 UTC m=+1021.072076738" Jan 21 00:26:05 crc kubenswrapper[5118]: I0121 00:26:05.921985 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482586-d784z" Jan 21 00:26:06 crc kubenswrapper[5118]: I0121 00:26:06.003121 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jwzg\" (UniqueName: \"kubernetes.io/projected/68ab75a7-408a-4c6c-b232-3b08fa01168f-kube-api-access-2jwzg\") pod \"68ab75a7-408a-4c6c-b232-3b08fa01168f\" (UID: \"68ab75a7-408a-4c6c-b232-3b08fa01168f\") " Jan 21 00:26:06 crc kubenswrapper[5118]: I0121 00:26:06.012386 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68ab75a7-408a-4c6c-b232-3b08fa01168f-kube-api-access-2jwzg" (OuterVolumeSpecName: "kube-api-access-2jwzg") pod "68ab75a7-408a-4c6c-b232-3b08fa01168f" (UID: "68ab75a7-408a-4c6c-b232-3b08fa01168f"). InnerVolumeSpecName "kube-api-access-2jwzg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:26:06 crc kubenswrapper[5118]: I0121 00:26:06.105195 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2jwzg\" (UniqueName: \"kubernetes.io/projected/68ab75a7-408a-4c6c-b232-3b08fa01168f-kube-api-access-2jwzg\") on node \"crc\" DevicePath \"\"" Jan 21 00:26:06 crc kubenswrapper[5118]: I0121 00:26:06.650656 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482586-d784z" Jan 21 00:26:06 crc kubenswrapper[5118]: I0121 00:26:06.655357 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482586-d784z" event={"ID":"68ab75a7-408a-4c6c-b232-3b08fa01168f","Type":"ContainerDied","Data":"b64e3b17ecc5d073ee53d02cb3d85b91577988f4c568282cedf644371523f9d7"} Jan 21 00:26:06 crc kubenswrapper[5118]: I0121 00:26:06.655419 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b64e3b17ecc5d073ee53d02cb3d85b91577988f4c568282cedf644371523f9d7" Jan 21 00:26:06 crc kubenswrapper[5118]: I0121 00:26:06.986460 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482580-tdx44"] Jan 21 00:26:06 crc kubenswrapper[5118]: I0121 00:26:06.992411 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482580-tdx44"] Jan 21 00:26:08 crc kubenswrapper[5118]: I0121 00:26:08.985037 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6bed478-86fa-4a4e-a75e-02a576884ad1" path="/var/lib/kubelet/pods/b6bed478-86fa-4a4e-a75e-02a576884ad1/volumes" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.127326 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.128693 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="68ab75a7-408a-4c6c-b232-3b08fa01168f" containerName="oc" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.128714 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="68ab75a7-408a-4c6c-b232-3b08fa01168f" containerName="oc" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.128893 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="68ab75a7-408a-4c6c-b232-3b08fa01168f" containerName="oc" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 
00:26:16.158467 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.158584 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.161494 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.161542 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.286994 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/b3c73688-0f77-4726-85bc-6a81cf3ff214-qdr-test-config\") pod \"qdr-test\" (UID: \"b3c73688-0f77-4726-85bc-6a81cf3ff214\") " pod="service-telemetry/qdr-test" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.287086 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twsqp\" (UniqueName: \"kubernetes.io/projected/b3c73688-0f77-4726-85bc-6a81cf3ff214-kube-api-access-twsqp\") pod \"qdr-test\" (UID: \"b3c73688-0f77-4726-85bc-6a81cf3ff214\") " pod="service-telemetry/qdr-test" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.287111 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/b3c73688-0f77-4726-85bc-6a81cf3ff214-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"b3c73688-0f77-4726-85bc-6a81cf3ff214\") " pod="service-telemetry/qdr-test" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.388746 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/b3c73688-0f77-4726-85bc-6a81cf3ff214-qdr-test-config\") pod \"qdr-test\" (UID: \"b3c73688-0f77-4726-85bc-6a81cf3ff214\") " pod="service-telemetry/qdr-test" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.388878 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-twsqp\" (UniqueName: \"kubernetes.io/projected/b3c73688-0f77-4726-85bc-6a81cf3ff214-kube-api-access-twsqp\") pod \"qdr-test\" (UID: \"b3c73688-0f77-4726-85bc-6a81cf3ff214\") " pod="service-telemetry/qdr-test" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.388908 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/b3c73688-0f77-4726-85bc-6a81cf3ff214-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"b3c73688-0f77-4726-85bc-6a81cf3ff214\") " pod="service-telemetry/qdr-test" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.390096 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/b3c73688-0f77-4726-85bc-6a81cf3ff214-qdr-test-config\") pod \"qdr-test\" (UID: \"b3c73688-0f77-4726-85bc-6a81cf3ff214\") " pod="service-telemetry/qdr-test" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.395605 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/b3c73688-0f77-4726-85bc-6a81cf3ff214-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"b3c73688-0f77-4726-85bc-6a81cf3ff214\") " pod="service-telemetry/qdr-test" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.406097 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-twsqp\" (UniqueName: \"kubernetes.io/projected/b3c73688-0f77-4726-85bc-6a81cf3ff214-kube-api-access-twsqp\") pod 
\"qdr-test\" (UID: \"b3c73688-0f77-4726-85bc-6a81cf3ff214\") " pod="service-telemetry/qdr-test" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.483700 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 21 00:26:16 crc kubenswrapper[5118]: I0121 00:26:16.931501 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 21 00:26:16 crc kubenswrapper[5118]: W0121 00:26:16.933286 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3c73688_0f77_4726_85bc_6a81cf3ff214.slice/crio-f620204e4d9fea8d007bf30cf28dd698c107c5349649e3da9cd74c8fe3b64938 WatchSource:0}: Error finding container f620204e4d9fea8d007bf30cf28dd698c107c5349649e3da9cd74c8fe3b64938: Status 404 returned error can't find the container with id f620204e4d9fea8d007bf30cf28dd698c107c5349649e3da9cd74c8fe3b64938 Jan 21 00:26:17 crc kubenswrapper[5118]: I0121 00:26:17.753088 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"b3c73688-0f77-4726-85bc-6a81cf3ff214","Type":"ContainerStarted","Data":"f620204e4d9fea8d007bf30cf28dd698c107c5349649e3da9cd74c8fe3b64938"} Jan 21 00:26:27 crc kubenswrapper[5118]: I0121 00:26:27.848142 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"b3c73688-0f77-4726-85bc-6a81cf3ff214","Type":"ContainerStarted","Data":"dd24f1c4d3d13355cfd7b738bcc8bc2dd3db516f1400ad4141f2969d70c3a480"} Jan 21 00:26:27 crc kubenswrapper[5118]: I0121 00:26:27.871633 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.60809973 podStartE2EDuration="11.871603851s" podCreationTimestamp="2026-01-21 00:26:16 +0000 UTC" firstStartedPulling="2026-01-21 00:26:16.936108948 +0000 UTC m=+1032.260355976" lastFinishedPulling="2026-01-21 00:26:27.199613079 +0000 UTC 
m=+1042.523860097" observedRunningTime="2026-01-21 00:26:27.863592337 +0000 UTC m=+1043.187839365" watchObservedRunningTime="2026-01-21 00:26:27.871603851 +0000 UTC m=+1043.195850879" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.164918 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-lkxrn"] Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.173206 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.176778 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.177391 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.178412 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.178485 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.178549 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.181077 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.189686 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-lkxrn"] Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 
00:26:28.270058 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-healthcheck-log\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.270141 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-ceilometer-publisher\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.270198 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-sensubility-config\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.270300 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.270470 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmvdm\" (UniqueName: \"kubernetes.io/projected/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-kube-api-access-pmvdm\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: 
\"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.270502 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.270527 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-collectd-config\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.371991 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-ceilometer-publisher\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.372115 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-sensubility-config\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.372375 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: 
\"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.372606 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pmvdm\" (UniqueName: \"kubernetes.io/projected/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-kube-api-access-pmvdm\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.372670 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.372725 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-collectd-config\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.372763 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-healthcheck-log\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.374090 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-ceilometer-publisher\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.376256 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-sensubility-config\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.377260 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-healthcheck-log\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.378099 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.378174 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-collectd-config\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.378564 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.406665 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmvdm\" (UniqueName: \"kubernetes.io/projected/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-kube-api-access-pmvdm\") pod \"stf-smoketest-smoke1-lkxrn\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.488443 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.609186 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.615837 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.621532 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.677228 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v4bw\" (UniqueName: \"kubernetes.io/projected/1849c1bc-0d2d-4bb2-9a70-5d67a614c157-kube-api-access-2v4bw\") pod \"curl\" (UID: \"1849c1bc-0d2d-4bb2-9a70-5d67a614c157\") " pod="service-telemetry/curl" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.778678 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2v4bw\" (UniqueName: \"kubernetes.io/projected/1849c1bc-0d2d-4bb2-9a70-5d67a614c157-kube-api-access-2v4bw\") pod \"curl\" (UID: \"1849c1bc-0d2d-4bb2-9a70-5d67a614c157\") " pod="service-telemetry/curl" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.798298 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v4bw\" (UniqueName: \"kubernetes.io/projected/1849c1bc-0d2d-4bb2-9a70-5d67a614c157-kube-api-access-2v4bw\") pod \"curl\" (UID: \"1849c1bc-0d2d-4bb2-9a70-5d67a614c157\") " pod="service-telemetry/curl" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.941459 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 21 00:26:28 crc kubenswrapper[5118]: I0121 00:26:28.962480 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-lkxrn"] Jan 21 00:26:28 crc kubenswrapper[5118]: W0121 00:26:28.965847 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b6b7ed3_089a_4de8_a5b2_894a2ef33492.slice/crio-60d3d556a4a8e167232d5489e7c0430bf487b62fbbbbbdb0d6997de406897686 WatchSource:0}: Error finding container 60d3d556a4a8e167232d5489e7c0430bf487b62fbbbbbdb0d6997de406897686: Status 404 returned error can't find the container with id 60d3d556a4a8e167232d5489e7c0430bf487b62fbbbbbdb0d6997de406897686 Jan 21 00:26:29 crc kubenswrapper[5118]: I0121 00:26:29.150704 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 21 00:26:29 crc kubenswrapper[5118]: W0121 00:26:29.156594 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1849c1bc_0d2d_4bb2_9a70_5d67a614c157.slice/crio-3b809b07c6eab2b44d49b3b01a5d9fb7ed31be689f51bd58bc75b483adb8efad WatchSource:0}: Error finding container 3b809b07c6eab2b44d49b3b01a5d9fb7ed31be689f51bd58bc75b483adb8efad: Status 404 returned error can't find the container with id 3b809b07c6eab2b44d49b3b01a5d9fb7ed31be689f51bd58bc75b483adb8efad Jan 21 00:26:29 crc kubenswrapper[5118]: I0121 00:26:29.862997 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"1849c1bc-0d2d-4bb2-9a70-5d67a614c157","Type":"ContainerStarted","Data":"3b809b07c6eab2b44d49b3b01a5d9fb7ed31be689f51bd58bc75b483adb8efad"} Jan 21 00:26:29 crc kubenswrapper[5118]: I0121 00:26:29.864643 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-lkxrn" 
event={"ID":"5b6b7ed3-089a-4de8-a5b2-894a2ef33492","Type":"ContainerStarted","Data":"60d3d556a4a8e167232d5489e7c0430bf487b62fbbbbbdb0d6997de406897686"} Jan 21 00:26:30 crc kubenswrapper[5118]: I0121 00:26:30.874476 5118 generic.go:358] "Generic (PLEG): container finished" podID="1849c1bc-0d2d-4bb2-9a70-5d67a614c157" containerID="fc29fd8bb8c02d779f2f490d9049eaeb8ad6ba38547b0470a7181a4bce9652a5" exitCode=0 Jan 21 00:26:30 crc kubenswrapper[5118]: I0121 00:26:30.874842 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"1849c1bc-0d2d-4bb2-9a70-5d67a614c157","Type":"ContainerDied","Data":"fc29fd8bb8c02d779f2f490d9049eaeb8ad6ba38547b0470a7181a4bce9652a5"} Jan 21 00:26:34 crc kubenswrapper[5118]: I0121 00:26:34.101988 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 21 00:26:34 crc kubenswrapper[5118]: I0121 00:26:34.169987 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v4bw\" (UniqueName: \"kubernetes.io/projected/1849c1bc-0d2d-4bb2-9a70-5d67a614c157-kube-api-access-2v4bw\") pod \"1849c1bc-0d2d-4bb2-9a70-5d67a614c157\" (UID: \"1849c1bc-0d2d-4bb2-9a70-5d67a614c157\") " Jan 21 00:26:34 crc kubenswrapper[5118]: I0121 00:26:34.194309 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1849c1bc-0d2d-4bb2-9a70-5d67a614c157-kube-api-access-2v4bw" (OuterVolumeSpecName: "kube-api-access-2v4bw") pod "1849c1bc-0d2d-4bb2-9a70-5d67a614c157" (UID: "1849c1bc-0d2d-4bb2-9a70-5d67a614c157"). InnerVolumeSpecName "kube-api-access-2v4bw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:26:34 crc kubenswrapper[5118]: I0121 00:26:34.271741 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2v4bw\" (UniqueName: \"kubernetes.io/projected/1849c1bc-0d2d-4bb2-9a70-5d67a614c157-kube-api-access-2v4bw\") on node \"crc\" DevicePath \"\"" Jan 21 00:26:34 crc kubenswrapper[5118]: I0121 00:26:34.320999 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_1849c1bc-0d2d-4bb2-9a70-5d67a614c157/curl/0.log" Jan 21 00:26:34 crc kubenswrapper[5118]: I0121 00:26:34.631292 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-t9sb4_7209940f-514b-4189-beeb-fbd77f7e6a15/prometheus-webhook-snmp/0.log" Jan 21 00:26:34 crc kubenswrapper[5118]: I0121 00:26:34.903176 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 21 00:26:34 crc kubenswrapper[5118]: I0121 00:26:34.903211 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"1849c1bc-0d2d-4bb2-9a70-5d67a614c157","Type":"ContainerDied","Data":"3b809b07c6eab2b44d49b3b01a5d9fb7ed31be689f51bd58bc75b483adb8efad"} Jan 21 00:26:34 crc kubenswrapper[5118]: I0121 00:26:34.903268 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b809b07c6eab2b44d49b3b01a5d9fb7ed31be689f51bd58bc75b483adb8efad" Jan 21 00:26:38 crc kubenswrapper[5118]: I0121 00:26:38.938307 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-lkxrn" event={"ID":"5b6b7ed3-089a-4de8-a5b2-894a2ef33492","Type":"ContainerStarted","Data":"f916704cb8a84f40a42281b1b6b4d1212726894337498d6faf27990b2f74d0c5"} Jan 21 00:26:45 crc kubenswrapper[5118]: I0121 00:26:45.001708 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-lkxrn" 
event={"ID":"5b6b7ed3-089a-4de8-a5b2-894a2ef33492","Type":"ContainerStarted","Data":"c37322fc4f91e0730d77833222cbd1fcb1f9538f02dcc54b3f56de627edcc324"} Jan 21 00:26:45 crc kubenswrapper[5118]: I0121 00:26:45.019812 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-lkxrn" podStartSLOduration=1.809349474 podStartE2EDuration="17.019794986s" podCreationTimestamp="2026-01-21 00:26:28 +0000 UTC" firstStartedPulling="2026-01-21 00:26:28.968585376 +0000 UTC m=+1044.292832394" lastFinishedPulling="2026-01-21 00:26:44.179030888 +0000 UTC m=+1059.503277906" observedRunningTime="2026-01-21 00:26:45.017135045 +0000 UTC m=+1060.341382083" watchObservedRunningTime="2026-01-21 00:26:45.019794986 +0000 UTC m=+1060.344042014" Jan 21 00:27:03 crc kubenswrapper[5118]: I0121 00:27:03.800491 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:27:03 crc kubenswrapper[5118]: I0121 00:27:03.801159 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:27:04 crc kubenswrapper[5118]: I0121 00:27:04.799001 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-t9sb4_7209940f-514b-4189-beeb-fbd77f7e6a15/prometheus-webhook-snmp/0.log" Jan 21 00:27:06 crc kubenswrapper[5118]: I0121 00:27:06.298563 5118 scope.go:117] "RemoveContainer" containerID="37310520918d79172457c9b32c6b915c3a0f193abcc41c48a4cac8d81c85d580" Jan 21 00:27:13 crc 
kubenswrapper[5118]: I0121 00:27:13.232849 5118 generic.go:358] "Generic (PLEG): container finished" podID="5b6b7ed3-089a-4de8-a5b2-894a2ef33492" containerID="f916704cb8a84f40a42281b1b6b4d1212726894337498d6faf27990b2f74d0c5" exitCode=0 Jan 21 00:27:13 crc kubenswrapper[5118]: I0121 00:27:13.232960 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-lkxrn" event={"ID":"5b6b7ed3-089a-4de8-a5b2-894a2ef33492","Type":"ContainerDied","Data":"f916704cb8a84f40a42281b1b6b4d1212726894337498d6faf27990b2f74d0c5"} Jan 21 00:27:13 crc kubenswrapper[5118]: I0121 00:27:13.234065 5118 scope.go:117] "RemoveContainer" containerID="f916704cb8a84f40a42281b1b6b4d1212726894337498d6faf27990b2f74d0c5" Jan 21 00:27:16 crc kubenswrapper[5118]: I0121 00:27:16.260262 5118 generic.go:358] "Generic (PLEG): container finished" podID="5b6b7ed3-089a-4de8-a5b2-894a2ef33492" containerID="c37322fc4f91e0730d77833222cbd1fcb1f9538f02dcc54b3f56de627edcc324" exitCode=0 Jan 21 00:27:16 crc kubenswrapper[5118]: I0121 00:27:16.260350 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-lkxrn" event={"ID":"5b6b7ed3-089a-4de8-a5b2-894a2ef33492","Type":"ContainerDied","Data":"c37322fc4f91e0730d77833222cbd1fcb1f9538f02dcc54b3f56de627edcc324"} Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.529538 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.696713 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-ceilometer-entrypoint-script\") pod \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.697006 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-healthcheck-log\") pod \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.697103 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-ceilometer-publisher\") pod \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.697205 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmvdm\" (UniqueName: \"kubernetes.io/projected/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-kube-api-access-pmvdm\") pod \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.697331 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-sensubility-config\") pod \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.697450 5118 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-collectd-entrypoint-script\") pod \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.697554 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-collectd-config\") pod \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\" (UID: \"5b6b7ed3-089a-4de8-a5b2-894a2ef33492\") " Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.706899 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-kube-api-access-pmvdm" (OuterVolumeSpecName: "kube-api-access-pmvdm") pod "5b6b7ed3-089a-4de8-a5b2-894a2ef33492" (UID: "5b6b7ed3-089a-4de8-a5b2-894a2ef33492"). InnerVolumeSpecName "kube-api-access-pmvdm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.718274 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "5b6b7ed3-089a-4de8-a5b2-894a2ef33492" (UID: "5b6b7ed3-089a-4de8-a5b2-894a2ef33492"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.719965 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "5b6b7ed3-089a-4de8-a5b2-894a2ef33492" (UID: "5b6b7ed3-089a-4de8-a5b2-894a2ef33492"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.720590 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "5b6b7ed3-089a-4de8-a5b2-894a2ef33492" (UID: "5b6b7ed3-089a-4de8-a5b2-894a2ef33492"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.721058 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "5b6b7ed3-089a-4de8-a5b2-894a2ef33492" (UID: "5b6b7ed3-089a-4de8-a5b2-894a2ef33492"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.725056 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "5b6b7ed3-089a-4de8-a5b2-894a2ef33492" (UID: "5b6b7ed3-089a-4de8-a5b2-894a2ef33492"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.729074 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "5b6b7ed3-089a-4de8-a5b2-894a2ef33492" (UID: "5b6b7ed3-089a-4de8-a5b2-894a2ef33492"). InnerVolumeSpecName "healthcheck-log". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.799390 5118 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.799664 5118 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.799674 5118 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.799682 5118 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.799691 5118 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.799701 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pmvdm\" (UniqueName: \"kubernetes.io/projected/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-kube-api-access-pmvdm\") on node \"crc\" DevicePath \"\"" Jan 21 00:27:17 crc kubenswrapper[5118]: I0121 00:27:17.799708 5118 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/5b6b7ed3-089a-4de8-a5b2-894a2ef33492-sensubility-config\") on node 
\"crc\" DevicePath \"\"" Jan 21 00:27:18 crc kubenswrapper[5118]: I0121 00:27:18.289426 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-lkxrn" Jan 21 00:27:18 crc kubenswrapper[5118]: I0121 00:27:18.289468 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-lkxrn" event={"ID":"5b6b7ed3-089a-4de8-a5b2-894a2ef33492","Type":"ContainerDied","Data":"60d3d556a4a8e167232d5489e7c0430bf487b62fbbbbbdb0d6997de406897686"} Jan 21 00:27:18 crc kubenswrapper[5118]: I0121 00:27:18.289513 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60d3d556a4a8e167232d5489e7c0430bf487b62fbbbbbdb0d6997de406897686" Jan 21 00:27:19 crc kubenswrapper[5118]: I0121 00:27:19.498224 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-lkxrn_5b6b7ed3-089a-4de8-a5b2-894a2ef33492/smoketest-collectd/0.log" Jan 21 00:27:19 crc kubenswrapper[5118]: I0121 00:27:19.750004 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-lkxrn_5b6b7ed3-089a-4de8-a5b2-894a2ef33492/smoketest-ceilometer/0.log" Jan 21 00:27:20 crc kubenswrapper[5118]: I0121 00:27:20.021318 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-xprn5_a46b6cc9-3c89-49ba-8687-5e3eecdee283/default-interconnect/0.log" Jan 21 00:27:20 crc kubenswrapper[5118]: I0121 00:27:20.281709 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-879kb_24b1ba21-7cfb-4bdc-84e7-63e5bacee435/bridge/1.log" Jan 21 00:27:20 crc kubenswrapper[5118]: I0121 00:27:20.542071 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-879kb_24b1ba21-7cfb-4bdc-84e7-63e5bacee435/sg-core/0.log" Jan 21 00:27:20 
crc kubenswrapper[5118]: I0121 00:27:20.836115 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h_acab55a4-1334-4e6a-9160-0693a38de48d/bridge/1.log" Jan 21 00:27:21 crc kubenswrapper[5118]: I0121 00:27:21.080575 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h_acab55a4-1334-4e6a-9160-0693a38de48d/sg-core/0.log" Jan 21 00:27:21 crc kubenswrapper[5118]: I0121 00:27:21.324075 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs_26e412ff-6455-444c-a91c-350651a82800/bridge/1.log" Jan 21 00:27:21 crc kubenswrapper[5118]: I0121 00:27:21.573150 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs_26e412ff-6455-444c-a91c-350651a82800/sg-core/0.log" Jan 21 00:27:21 crc kubenswrapper[5118]: I0121 00:27:21.814123 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl_1eae1c63-1b25-4f53-83e6-cc1fcd7cf325/bridge/1.log" Jan 21 00:27:22 crc kubenswrapper[5118]: I0121 00:27:22.066810 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl_1eae1c63-1b25-4f53-83e6-cc1fcd7cf325/sg-core/0.log" Jan 21 00:27:22 crc kubenswrapper[5118]: I0121 00:27:22.314356 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv_d85a2d17-ce06-446d-aabc-cc02486c78eb/bridge/1.log" Jan 21 00:27:22 crc kubenswrapper[5118]: I0121 00:27:22.559043 5118 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv_d85a2d17-ce06-446d-aabc-cc02486c78eb/sg-core/0.log" Jan 21 00:27:24 crc kubenswrapper[5118]: I0121 00:27:24.505309 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-nzxxg_83839c7f-0d0d-41f1-83bf-77a677ceb327/operator/0.log" Jan 21 00:27:24 crc kubenswrapper[5118]: I0121 00:27:24.869059 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_ee3806b6-31c0-470c-8dcf-9f7e40a5929a/prometheus/0.log" Jan 21 00:27:25 crc kubenswrapper[5118]: I0121 00:27:25.141629 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_3a83d7d0-ad82-4da4-8c10-269310b2e144/elasticsearch/0.log" Jan 21 00:27:25 crc kubenswrapper[5118]: I0121 00:27:25.409582 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-t9sb4_7209940f-514b-4189-beeb-fbd77f7e6a15/prometheus-webhook-snmp/0.log" Jan 21 00:27:25 crc kubenswrapper[5118]: I0121 00:27:25.668767 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_7ce5576f-ce16-4a13-9cca-8fcc68e399e7/alertmanager/0.log" Jan 21 00:27:33 crc kubenswrapper[5118]: I0121 00:27:33.800510 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:27:33 crc kubenswrapper[5118]: I0121 00:27:33.801099 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:27:38 crc kubenswrapper[5118]: I0121 00:27:38.452734 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-794b5697c7-z99rt_a691b713-173e-4931-85e9-1510e1a0ee6a/operator/0.log" Jan 21 00:27:40 crc kubenswrapper[5118]: I0121 00:27:40.355793 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-nzxxg_83839c7f-0d0d-41f1-83bf-77a677ceb327/operator/0.log" Jan 21 00:27:40 crc kubenswrapper[5118]: I0121 00:27:40.623815 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_b3c73688-0f77-4726-85bc-6a81cf3ff214/qdr/0.log" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.172179 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482588-z9j84"] Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.174807 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5b6b7ed3-089a-4de8-a5b2-894a2ef33492" containerName="smoketest-ceilometer" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.174931 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6b7ed3-089a-4de8-a5b2-894a2ef33492" containerName="smoketest-ceilometer" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.175046 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1849c1bc-0d2d-4bb2-9a70-5d67a614c157" containerName="curl" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.175133 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="1849c1bc-0d2d-4bb2-9a70-5d67a614c157" containerName="curl" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.175235 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5b6b7ed3-089a-4de8-a5b2-894a2ef33492" containerName="smoketest-collectd" Jan 21 00:28:00 crc 
kubenswrapper[5118]: I0121 00:28:00.175328 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6b7ed3-089a-4de8-a5b2-894a2ef33492" containerName="smoketest-collectd" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.175612 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="5b6b7ed3-089a-4de8-a5b2-894a2ef33492" containerName="smoketest-ceilometer" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.175717 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="1849c1bc-0d2d-4bb2-9a70-5d67a614c157" containerName="curl" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.175796 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="5b6b7ed3-089a-4de8-a5b2-894a2ef33492" containerName="smoketest-collectd" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.189734 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482588-z9j84"] Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.189908 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482588-z9j84" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.195403 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.195654 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.196228 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.268823 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdt6l\" (UniqueName: \"kubernetes.io/projected/77fa341b-7377-4da6-b48c-f4e1d7c98f79-kube-api-access-fdt6l\") pod \"auto-csr-approver-29482588-z9j84\" (UID: \"77fa341b-7377-4da6-b48c-f4e1d7c98f79\") " pod="openshift-infra/auto-csr-approver-29482588-z9j84" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.370851 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fdt6l\" (UniqueName: \"kubernetes.io/projected/77fa341b-7377-4da6-b48c-f4e1d7c98f79-kube-api-access-fdt6l\") pod \"auto-csr-approver-29482588-z9j84\" (UID: \"77fa341b-7377-4da6-b48c-f4e1d7c98f79\") " pod="openshift-infra/auto-csr-approver-29482588-z9j84" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.391009 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdt6l\" (UniqueName: \"kubernetes.io/projected/77fa341b-7377-4da6-b48c-f4e1d7c98f79-kube-api-access-fdt6l\") pod \"auto-csr-approver-29482588-z9j84\" (UID: \"77fa341b-7377-4da6-b48c-f4e1d7c98f79\") " pod="openshift-infra/auto-csr-approver-29482588-z9j84" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.535522 5118 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482588-z9j84" Jan 21 00:28:00 crc kubenswrapper[5118]: I0121 00:28:00.822457 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482588-z9j84"] Jan 21 00:28:01 crc kubenswrapper[5118]: I0121 00:28:01.654990 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482588-z9j84" event={"ID":"77fa341b-7377-4da6-b48c-f4e1d7c98f79","Type":"ContainerStarted","Data":"04d06f71a52cc3e5e4688c016fbcef9da96d157ed79e478f0790f59bde9adf4c"} Jan 21 00:28:02 crc kubenswrapper[5118]: I0121 00:28:02.662787 5118 generic.go:358] "Generic (PLEG): container finished" podID="77fa341b-7377-4da6-b48c-f4e1d7c98f79" containerID="199adca0646d99849c89aa14eeb02e5709d11a30a55f26dfb2c51c6032ab455d" exitCode=0 Jan 21 00:28:02 crc kubenswrapper[5118]: I0121 00:28:02.662878 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482588-z9j84" event={"ID":"77fa341b-7377-4da6-b48c-f4e1d7c98f79","Type":"ContainerDied","Data":"199adca0646d99849c89aa14eeb02e5709d11a30a55f26dfb2c51c6032ab455d"} Jan 21 00:28:03 crc kubenswrapper[5118]: I0121 00:28:03.801282 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:28:03 crc kubenswrapper[5118]: I0121 00:28:03.801724 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:28:03 crc kubenswrapper[5118]: I0121 
00:28:03.801793 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:28:03 crc kubenswrapper[5118]: I0121 00:28:03.802822 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61c6c2137480cc175302c2e82d6bbb9c15151d1ca8b8cb9acea1f49282d3488a"} pod="openshift-machine-config-operator/machine-config-daemon-22r9n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:28:03 crc kubenswrapper[5118]: I0121 00:28:03.802929 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" containerID="cri-o://61c6c2137480cc175302c2e82d6bbb9c15151d1ca8b8cb9acea1f49282d3488a" gracePeriod=600 Jan 21 00:28:03 crc kubenswrapper[5118]: I0121 00:28:03.953739 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482588-z9j84" Jan 21 00:28:04 crc kubenswrapper[5118]: I0121 00:28:04.130661 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdt6l\" (UniqueName: \"kubernetes.io/projected/77fa341b-7377-4da6-b48c-f4e1d7c98f79-kube-api-access-fdt6l\") pod \"77fa341b-7377-4da6-b48c-f4e1d7c98f79\" (UID: \"77fa341b-7377-4da6-b48c-f4e1d7c98f79\") " Jan 21 00:28:04 crc kubenswrapper[5118]: I0121 00:28:04.137128 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77fa341b-7377-4da6-b48c-f4e1d7c98f79-kube-api-access-fdt6l" (OuterVolumeSpecName: "kube-api-access-fdt6l") pod "77fa341b-7377-4da6-b48c-f4e1d7c98f79" (UID: "77fa341b-7377-4da6-b48c-f4e1d7c98f79"). InnerVolumeSpecName "kube-api-access-fdt6l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:28:04 crc kubenswrapper[5118]: I0121 00:28:04.234046 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fdt6l\" (UniqueName: \"kubernetes.io/projected/77fa341b-7377-4da6-b48c-f4e1d7c98f79-kube-api-access-fdt6l\") on node \"crc\" DevicePath \"\"" Jan 21 00:28:04 crc kubenswrapper[5118]: I0121 00:28:04.682132 5118 generic.go:358] "Generic (PLEG): container finished" podID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerID="61c6c2137480cc175302c2e82d6bbb9c15151d1ca8b8cb9acea1f49282d3488a" exitCode=0 Jan 21 00:28:04 crc kubenswrapper[5118]: I0121 00:28:04.682202 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerDied","Data":"61c6c2137480cc175302c2e82d6bbb9c15151d1ca8b8cb9acea1f49282d3488a"} Jan 21 00:28:04 crc kubenswrapper[5118]: I0121 00:28:04.682529 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"197e7e77ca3ea92693e0cbb821721a50117819efe533cb1e47ad37d07b7e056e"} Jan 21 00:28:04 crc kubenswrapper[5118]: I0121 00:28:04.682551 5118 scope.go:117] "RemoveContainer" containerID="f02b486c7f526cea45f0bc8e93498ac542cc749b10fbe7b2dc9e854f825b1f31" Jan 21 00:28:04 crc kubenswrapper[5118]: I0121 00:28:04.685376 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482588-z9j84" Jan 21 00:28:04 crc kubenswrapper[5118]: I0121 00:28:04.685476 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482588-z9j84" event={"ID":"77fa341b-7377-4da6-b48c-f4e1d7c98f79","Type":"ContainerDied","Data":"04d06f71a52cc3e5e4688c016fbcef9da96d157ed79e478f0790f59bde9adf4c"} Jan 21 00:28:04 crc kubenswrapper[5118]: I0121 00:28:04.685510 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04d06f71a52cc3e5e4688c016fbcef9da96d157ed79e478f0790f59bde9adf4c" Jan 21 00:28:05 crc kubenswrapper[5118]: I0121 00:28:05.023025 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482582-t6ttq"] Jan 21 00:28:05 crc kubenswrapper[5118]: I0121 00:28:05.027737 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482582-t6ttq"] Jan 21 00:28:06 crc kubenswrapper[5118]: I0121 00:28:06.418758 5118 scope.go:117] "RemoveContainer" containerID="3c8520506a654fa2f943175398e6675ee5a10a83d889409adccf5b4d3cd4a87f" Jan 21 00:28:06 crc kubenswrapper[5118]: I0121 00:28:06.444523 5118 scope.go:117] "RemoveContainer" containerID="2651e472c2c8b7f967145e4d0591618a122d65d77d4f1de9e128b680327ab074" Jan 21 00:28:06 crc kubenswrapper[5118]: I0121 00:28:06.498865 5118 scope.go:117] "RemoveContainer" containerID="2b8f9325627489cedd0c26d21a0aa9519d8f4ac6e734e293fed59b09e2efdc8d" Jan 21 00:28:06 crc kubenswrapper[5118]: I0121 00:28:06.993012 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f097a29-16a1-4f56-873e-7bdd4ee1e659" path="/var/lib/kubelet/pods/0f097a29-16a1-4f56-873e-7bdd4ee1e659/volumes" Jan 21 00:28:16 crc kubenswrapper[5118]: I0121 00:28:16.863957 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sghhc/must-gather-t6csk"] Jan 21 00:28:16 crc kubenswrapper[5118]: I0121 00:28:16.867471 5118 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="77fa341b-7377-4da6-b48c-f4e1d7c98f79" containerName="oc" Jan 21 00:28:16 crc kubenswrapper[5118]: I0121 00:28:16.867895 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="77fa341b-7377-4da6-b48c-f4e1d7c98f79" containerName="oc" Jan 21 00:28:16 crc kubenswrapper[5118]: I0121 00:28:16.868265 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="77fa341b-7377-4da6-b48c-f4e1d7c98f79" containerName="oc" Jan 21 00:28:16 crc kubenswrapper[5118]: I0121 00:28:16.883054 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sghhc/must-gather-t6csk" Jan 21 00:28:16 crc kubenswrapper[5118]: I0121 00:28:16.884248 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sghhc/must-gather-t6csk"] Jan 21 00:28:16 crc kubenswrapper[5118]: I0121 00:28:16.887479 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-sghhc\"/\"openshift-service-ca.crt\"" Jan 21 00:28:16 crc kubenswrapper[5118]: I0121 00:28:16.889873 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-sghhc\"/\"kube-root-ca.crt\"" Jan 21 00:28:16 crc kubenswrapper[5118]: I0121 00:28:16.964605 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlk7l\" (UniqueName: \"kubernetes.io/projected/dac8a762-dcdd-4407-ab19-49554b2ced20-kube-api-access-xlk7l\") pod \"must-gather-t6csk\" (UID: \"dac8a762-dcdd-4407-ab19-49554b2ced20\") " pod="openshift-must-gather-sghhc/must-gather-t6csk" Jan 21 00:28:16 crc kubenswrapper[5118]: I0121 00:28:16.964686 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dac8a762-dcdd-4407-ab19-49554b2ced20-must-gather-output\") pod 
\"must-gather-t6csk\" (UID: \"dac8a762-dcdd-4407-ab19-49554b2ced20\") " pod="openshift-must-gather-sghhc/must-gather-t6csk" Jan 21 00:28:17 crc kubenswrapper[5118]: I0121 00:28:17.066197 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xlk7l\" (UniqueName: \"kubernetes.io/projected/dac8a762-dcdd-4407-ab19-49554b2ced20-kube-api-access-xlk7l\") pod \"must-gather-t6csk\" (UID: \"dac8a762-dcdd-4407-ab19-49554b2ced20\") " pod="openshift-must-gather-sghhc/must-gather-t6csk" Jan 21 00:28:17 crc kubenswrapper[5118]: I0121 00:28:17.066499 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dac8a762-dcdd-4407-ab19-49554b2ced20-must-gather-output\") pod \"must-gather-t6csk\" (UID: \"dac8a762-dcdd-4407-ab19-49554b2ced20\") " pod="openshift-must-gather-sghhc/must-gather-t6csk" Jan 21 00:28:17 crc kubenswrapper[5118]: I0121 00:28:17.067078 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dac8a762-dcdd-4407-ab19-49554b2ced20-must-gather-output\") pod \"must-gather-t6csk\" (UID: \"dac8a762-dcdd-4407-ab19-49554b2ced20\") " pod="openshift-must-gather-sghhc/must-gather-t6csk" Jan 21 00:28:17 crc kubenswrapper[5118]: I0121 00:28:17.097451 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlk7l\" (UniqueName: \"kubernetes.io/projected/dac8a762-dcdd-4407-ab19-49554b2ced20-kube-api-access-xlk7l\") pod \"must-gather-t6csk\" (UID: \"dac8a762-dcdd-4407-ab19-49554b2ced20\") " pod="openshift-must-gather-sghhc/must-gather-t6csk" Jan 21 00:28:17 crc kubenswrapper[5118]: I0121 00:28:17.199332 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sghhc/must-gather-t6csk" Jan 21 00:28:17 crc kubenswrapper[5118]: I0121 00:28:17.653984 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sghhc/must-gather-t6csk"] Jan 21 00:28:17 crc kubenswrapper[5118]: I0121 00:28:17.815921 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sghhc/must-gather-t6csk" event={"ID":"dac8a762-dcdd-4407-ab19-49554b2ced20","Type":"ContainerStarted","Data":"41a297712a9fa0a0c6883136dd6ad37ea54d103fae67723e7fd18ab42224dc64"} Jan 21 00:28:23 crc kubenswrapper[5118]: I0121 00:28:23.882175 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sghhc/must-gather-t6csk" event={"ID":"dac8a762-dcdd-4407-ab19-49554b2ced20","Type":"ContainerStarted","Data":"229066ed8d264067fcf158f467ac666ea9c5fa460ea25ae1ba084ecdfaf12166"} Jan 21 00:28:23 crc kubenswrapper[5118]: I0121 00:28:23.882834 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sghhc/must-gather-t6csk" event={"ID":"dac8a762-dcdd-4407-ab19-49554b2ced20","Type":"ContainerStarted","Data":"33708008c6e2fe929c5072060363071fc5a9cd22227d87d5111dd7b972cf9949"} Jan 21 00:28:23 crc kubenswrapper[5118]: I0121 00:28:23.910840 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sghhc/must-gather-t6csk" podStartSLOduration=2.9146992640000002 podStartE2EDuration="7.910819418s" podCreationTimestamp="2026-01-21 00:28:16 +0000 UTC" firstStartedPulling="2026-01-21 00:28:17.669558774 +0000 UTC m=+1152.993805792" lastFinishedPulling="2026-01-21 00:28:22.665678908 +0000 UTC m=+1157.989925946" observedRunningTime="2026-01-21 00:28:23.906397779 +0000 UTC m=+1159.230644807" watchObservedRunningTime="2026-01-21 00:28:23.910819418 +0000 UTC m=+1159.235066436" Jan 21 00:28:35 crc kubenswrapper[5118]: I0121 00:28:35.585019 5118 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-nkx6r_271e0654-9d86-4ec1-8c25-d345a8a1eb0a/control-plane-machine-set-operator/0.log" Jan 21 00:28:35 crc kubenswrapper[5118]: I0121 00:28:35.604959 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-r2gm9_1202d380-a207-455c-8bd8-2b82e7974afa/kube-rbac-proxy/0.log" Jan 21 00:28:35 crc kubenswrapper[5118]: I0121 00:28:35.614504 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-r2gm9_1202d380-a207-455c-8bd8-2b82e7974afa/machine-api-operator/0.log" Jan 21 00:28:40 crc kubenswrapper[5118]: I0121 00:28:40.469821 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-rhl9w_afeab2f1-ad2a-4d1e-915e-6dbd338641e5/cert-manager-controller/0.log" Jan 21 00:28:40 crc kubenswrapper[5118]: I0121 00:28:40.486087 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-7cfzv_54dd7013-936e-44cb-92df-0e4ed02dd3ba/cert-manager-cainjector/0.log" Jan 21 00:28:40 crc kubenswrapper[5118]: I0121 00:28:40.503998 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-q9qmg_a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b/cert-manager-webhook/0.log" Jan 21 00:28:45 crc kubenswrapper[5118]: I0121 00:28:45.543047 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-6mrsj_3b71239b-0442-4a3d-9df1-d0c8727f356b/prometheus-operator/0.log" Jan 21 00:28:45 crc kubenswrapper[5118]: I0121 00:28:45.554027 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77696d8df9-jdldg_c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a/prometheus-operator-admission-webhook/0.log" Jan 21 00:28:45 crc kubenswrapper[5118]: I0121 
00:28:45.571428 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf_66602a58-82f5-428b-8473-2f3e878d94e5/prometheus-operator-admission-webhook/0.log" Jan 21 00:28:45 crc kubenswrapper[5118]: I0121 00:28:45.600123 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-f76dw_a82d3afe-1c85-447e-8430-14b7b3aa4780/operator/0.log" Jan 21 00:28:45 crc kubenswrapper[5118]: I0121 00:28:45.615960 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-lcwrz_a9b6f709-1ac0-463b-be90-11b3065eb4d9/perses-operator/0.log" Jan 21 00:28:50 crc kubenswrapper[5118]: I0121 00:28:50.605255 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7_403f2683-0efe-4220-b481-fd8ec6a89da0/extract/0.log" Jan 21 00:28:50 crc kubenswrapper[5118]: I0121 00:28:50.613825 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7_403f2683-0efe-4220-b481-fd8ec6a89da0/util/0.log" Jan 21 00:28:50 crc kubenswrapper[5118]: I0121 00:28:50.653466 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931axktn7_403f2683-0efe-4220-b481-fd8ec6a89da0/pull/0.log" Jan 21 00:28:50 crc kubenswrapper[5118]: I0121 00:28:50.663393 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd_259b3b29-d29f-46b7-8808-75e572aadf9f/extract/0.log" Jan 21 00:28:50 crc kubenswrapper[5118]: I0121 00:28:50.676304 5118 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd_259b3b29-d29f-46b7-8808-75e572aadf9f/util/0.log" Jan 21 00:28:50 crc kubenswrapper[5118]: I0121 00:28:50.685993 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwtrnd_259b3b29-d29f-46b7-8808-75e572aadf9f/pull/0.log" Jan 21 00:28:50 crc kubenswrapper[5118]: I0121 00:28:50.701849 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt_1d7805d7-a48a-488e-9f51-715cd1e444bf/extract/0.log" Jan 21 00:28:50 crc kubenswrapper[5118]: I0121 00:28:50.712142 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt_1d7805d7-a48a-488e-9f51-715cd1e444bf/util/0.log" Jan 21 00:28:50 crc kubenswrapper[5118]: I0121 00:28:50.720310 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e698gt_1d7805d7-a48a-488e-9f51-715cd1e444bf/pull/0.log" Jan 21 00:28:50 crc kubenswrapper[5118]: I0121 00:28:50.732553 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk_5435bf24-0656-4edc-aa9a-9475d2cb648d/extract/0.log" Jan 21 00:28:50 crc kubenswrapper[5118]: I0121 00:28:50.741179 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk_5435bf24-0656-4edc-aa9a-9475d2cb648d/util/0.log" Jan 21 00:28:50 crc kubenswrapper[5118]: I0121 00:28:50.751065 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g8vzk_5435bf24-0656-4edc-aa9a-9475d2cb648d/pull/0.log" Jan 
21 00:28:50 crc kubenswrapper[5118]: I0121 00:28:50.913535 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gptdw_c35d8860-c9f7-468d-9832-45b92d9d6e1c/registry-server/0.log" Jan 21 00:28:50 crc kubenswrapper[5118]: I0121 00:28:50.918184 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gptdw_c35d8860-c9f7-468d-9832-45b92d9d6e1c/extract-utilities/0.log" Jan 21 00:28:50 crc kubenswrapper[5118]: I0121 00:28:50.925580 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gptdw_c35d8860-c9f7-468d-9832-45b92d9d6e1c/extract-content/0.log" Jan 21 00:28:51 crc kubenswrapper[5118]: I0121 00:28:51.245338 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-45trt_daccdf66-85c0-49b6-a857-638d2b782a9a/registry-server/0.log" Jan 21 00:28:51 crc kubenswrapper[5118]: I0121 00:28:51.250288 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-45trt_daccdf66-85c0-49b6-a857-638d2b782a9a/extract-utilities/0.log" Jan 21 00:28:51 crc kubenswrapper[5118]: I0121 00:28:51.258276 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-45trt_daccdf66-85c0-49b6-a857-638d2b782a9a/extract-content/0.log" Jan 21 00:28:51 crc kubenswrapper[5118]: I0121 00:28:51.273655 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-4wgxd_602c053c-5e99-4f10-888b-0ea7a740a476/marketplace-operator/0.log" Jan 21 00:28:51 crc kubenswrapper[5118]: I0121 00:28:51.496976 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cgwmg_684a88a4-f9ff-4495-85b9-499e70a2d8b4/registry-server/0.log" Jan 21 00:28:51 crc kubenswrapper[5118]: I0121 00:28:51.501573 5118 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_redhat-operators-cgwmg_684a88a4-f9ff-4495-85b9-499e70a2d8b4/extract-utilities/0.log" Jan 21 00:28:51 crc kubenswrapper[5118]: I0121 00:28:51.508498 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cgwmg_684a88a4-f9ff-4495-85b9-499e70a2d8b4/extract-content/0.log" Jan 21 00:28:55 crc kubenswrapper[5118]: I0121 00:28:55.188761 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-6mrsj_3b71239b-0442-4a3d-9df1-d0c8727f356b/prometheus-operator/0.log" Jan 21 00:28:55 crc kubenswrapper[5118]: I0121 00:28:55.198874 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77696d8df9-jdldg_c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a/prometheus-operator-admission-webhook/0.log" Jan 21 00:28:55 crc kubenswrapper[5118]: I0121 00:28:55.212640 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf_66602a58-82f5-428b-8473-2f3e878d94e5/prometheus-operator-admission-webhook/0.log" Jan 21 00:28:55 crc kubenswrapper[5118]: I0121 00:28:55.227996 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-f76dw_a82d3afe-1c85-447e-8430-14b7b3aa4780/operator/0.log" Jan 21 00:28:55 crc kubenswrapper[5118]: I0121 00:28:55.250120 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-lcwrz_a9b6f709-1ac0-463b-be90-11b3065eb4d9/perses-operator/0.log" Jan 21 00:29:03 crc kubenswrapper[5118]: I0121 00:29:03.446947 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-6mrsj_3b71239b-0442-4a3d-9df1-d0c8727f356b/prometheus-operator/0.log" Jan 21 00:29:03 crc kubenswrapper[5118]: I0121 00:29:03.459338 5118 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77696d8df9-jdldg_c8f5c4be-4fd8-4dde-b8da-13d367a1ca0a/prometheus-operator-admission-webhook/0.log" Jan 21 00:29:03 crc kubenswrapper[5118]: I0121 00:29:03.476683 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77696d8df9-rn2lf_66602a58-82f5-428b-8473-2f3e878d94e5/prometheus-operator-admission-webhook/0.log" Jan 21 00:29:03 crc kubenswrapper[5118]: I0121 00:29:03.502647 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-f76dw_a82d3afe-1c85-447e-8430-14b7b3aa4780/operator/0.log" Jan 21 00:29:03 crc kubenswrapper[5118]: I0121 00:29:03.511290 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-lcwrz_a9b6f709-1ac0-463b-be90-11b3065eb4d9/perses-operator/0.log" Jan 21 00:29:03 crc kubenswrapper[5118]: I0121 00:29:03.630913 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-rhl9w_afeab2f1-ad2a-4d1e-915e-6dbd338641e5/cert-manager-controller/0.log" Jan 21 00:29:03 crc kubenswrapper[5118]: I0121 00:29:03.644604 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-7cfzv_54dd7013-936e-44cb-92df-0e4ed02dd3ba/cert-manager-cainjector/0.log" Jan 21 00:29:03 crc kubenswrapper[5118]: I0121 00:29:03.655657 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-q9qmg_a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b/cert-manager-webhook/0.log" Jan 21 00:29:04 crc kubenswrapper[5118]: I0121 00:29:04.129414 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-rhl9w_afeab2f1-ad2a-4d1e-915e-6dbd338641e5/cert-manager-controller/0.log" Jan 21 00:29:04 crc kubenswrapper[5118]: 
I0121 00:29:04.140268 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-7cfzv_54dd7013-936e-44cb-92df-0e4ed02dd3ba/cert-manager-cainjector/0.log" Jan 21 00:29:04 crc kubenswrapper[5118]: I0121 00:29:04.149926 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-q9qmg_a1ed4705-4ddb-4ee6-bce2-c8b90c8a459b/cert-manager-webhook/0.log" Jan 21 00:29:04 crc kubenswrapper[5118]: I0121 00:29:04.584819 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-nkx6r_271e0654-9d86-4ec1-8c25-d345a8a1eb0a/control-plane-machine-set-operator/0.log" Jan 21 00:29:04 crc kubenswrapper[5118]: I0121 00:29:04.597140 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-r2gm9_1202d380-a207-455c-8bd8-2b82e7974afa/kube-rbac-proxy/0.log" Jan 21 00:29:04 crc kubenswrapper[5118]: I0121 00:29:04.604711 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-r2gm9_1202d380-a207-455c-8bd8-2b82e7974afa/machine-api-operator/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.093115 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v_856b1a14-e4ae-4518-a553-056f5d736bc8/extract/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.100698 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v_856b1a14-e4ae-4518-a553-056f5d736bc8/util/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.109412 5118 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666124k2v_856b1a14-e4ae-4518-a553-056f5d736bc8/pull/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.116236 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p_9d9a016c-6c95-45d3-83f9-297e6294957b/extract/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.122813 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p_9d9a016c-6c95-45d3-83f9-297e6294957b/util/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.130831 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572g2f7p_9d9a016c-6c95-45d3-83f9-297e6294957b/pull/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.146410 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_7ce5576f-ce16-4a13-9cca-8fcc68e399e7/alertmanager/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.155727 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_7ce5576f-ce16-4a13-9cca-8fcc68e399e7/config-reloader/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.162850 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_7ce5576f-ce16-4a13-9cca-8fcc68e399e7/oauth-proxy/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.170496 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_7ce5576f-ce16-4a13-9cca-8fcc68e399e7/init-config-reloader/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.181900 5118 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_awatch-operators-service-telemetry-operator-bundle-nightly-head_e5695a91-ba6f-481a-8978-9b2cf485424e/registry-grpc/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.193531 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_awatch-operators-service-telemetry-operator-bundle-nightly-head_e5695a91-ba6f-481a-8978-9b2cf485424e/registry-grpc-init/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.208754 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_1849c1bc-0d2d-4bb2-9a70-5d67a614c157/curl/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.217617 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl_1eae1c63-1b25-4f53-83e6-cc1fcd7cf325/bridge/1.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.218629 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl_1eae1c63-1b25-4f53-83e6-cc1fcd7cf325/bridge/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.223067 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-98d47fbd5-w8mnl_1eae1c63-1b25-4f53-83e6-cc1fcd7cf325/sg-core/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.236084 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs_26e412ff-6455-444c-a91c-350651a82800/oauth-proxy/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.242899 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs_26e412ff-6455-444c-a91c-350651a82800/bridge/1.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.243558 5118 log.go:25] "Finished parsing log 
file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs_26e412ff-6455-444c-a91c-350651a82800/bridge/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.247741 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-tp7hs_26e412ff-6455-444c-a91c-350651a82800/sg-core/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.259580 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h_acab55a4-1334-4e6a-9160-0693a38de48d/bridge/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.260122 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h_acab55a4-1334-4e6a-9160-0693a38de48d/bridge/1.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.264713 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7f9f666f58-z4s4h_acab55a4-1334-4e6a-9160-0693a38de48d/sg-core/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.275693 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-879kb_24b1ba21-7cfb-4bdc-84e7-63e5bacee435/oauth-proxy/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.282799 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-879kb_24b1ba21-7cfb-4bdc-84e7-63e5bacee435/bridge/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.283128 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-879kb_24b1ba21-7cfb-4bdc-84e7-63e5bacee435/bridge/1.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.287141 5118 
log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-879kb_24b1ba21-7cfb-4bdc-84e7-63e5bacee435/sg-core/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.304448 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv_d85a2d17-ce06-446d-aabc-cc02486c78eb/oauth-proxy/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.311066 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv_d85a2d17-ce06-446d-aabc-cc02486c78eb/bridge/1.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.312180 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv_d85a2d17-ce06-446d-aabc-cc02486c78eb/bridge/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.318298 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pjrqv_d85a2d17-ce06-446d-aabc-cc02486c78eb/sg-core/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.340858 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-xprn5_a46b6cc9-3c89-49ba-8687-5e3eecdee283/default-interconnect/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.347922 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-t9sb4_7209940f-514b-4189-beeb-fbd77f7e6a15/prometheus-webhook-snmp/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.381714 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elastic-operator-85b59756dc-hxvxk_cd6cbfc5-cdc4-4142-956b-c2f60a030179/manager/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.390023 5118 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.390054 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.404629 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_3a83d7d0-ad82-4da4-8c10-269310b2e144/elasticsearch/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.405426 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.405472 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.414478 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_3a83d7d0-ad82-4da4-8c10-269310b2e144/elastic-internal-init-filesystem/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.419122 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_3a83d7d0-ad82-4da4-8c10-269310b2e144/elastic-internal-suspend/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.428770 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_infrawatch-operators-smart-gateway-operator-bundle-nightly-head_e352d8be-c25b-4892-b368-9816c38e151c/registry-grpc/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.432967 5118 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_infrawatch-operators-smart-gateway-operator-bundle-nightly-head_e352d8be-c25b-4892-b368-9816c38e151c/registry-grpc-init/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.443790 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_interconnect-operator-78b9bd8798-9tpjv_2f613dd9-bed2-40a6-aabc-5fa37c0dbbb2/interconnect-operator/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.466792 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_ee3806b6-31c0-470c-8dcf-9f7e40a5929a/prometheus/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.472120 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_ee3806b6-31c0-470c-8dcf-9f7e40a5929a/config-reloader/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.480123 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_ee3806b6-31c0-470c-8dcf-9f7e40a5929a/oauth-proxy/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.488107 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_ee3806b6-31c0-470c-8dcf-9f7e40a5929a/init-config-reloader/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.501195 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_b3c73688-0f77-4726-85bc-6a81cf3ff214/qdr/0.log" Jan 21 00:29:05 crc kubenswrapper[5118]: I0121 00:29:05.673224 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-794b5697c7-z99rt_a691b713-173e-4931-85e9-1510e1a0ee6a/operator/0.log" Jan 21 00:29:06 crc kubenswrapper[5118]: I0121 00:29:06.557224 5118 scope.go:117] "RemoveContainer" containerID="a1a8ac72acddc229e2e3eee4429a00ce12772e2fb7c232afa4a91ad745b8570d" Jan 21 00:29:07 crc kubenswrapper[5118]: I0121 
00:29:07.298139 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-nzxxg_83839c7f-0d0d-41f1-83bf-77a677ceb327/operator/0.log" Jan 21 00:29:07 crc kubenswrapper[5118]: I0121 00:29:07.317176 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-lkxrn_5b6b7ed3-089a-4de8-a5b2-894a2ef33492/smoketest-collectd/0.log" Jan 21 00:29:07 crc kubenswrapper[5118]: I0121 00:29:07.323574 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-lkxrn_5b6b7ed3-089a-4de8-a5b2-894a2ef33492/smoketest-ceilometer/0.log" Jan 21 00:29:08 crc kubenswrapper[5118]: I0121 00:29:08.582894 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-d4lsz_0541bb33-5d4a-4ef9-964c-884c727499f6/kube-multus-additional-cni-plugins/0.log" Jan 21 00:29:08 crc kubenswrapper[5118]: I0121 00:29:08.592296 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-d4lsz_0541bb33-5d4a-4ef9-964c-884c727499f6/egress-router-binary-copy/0.log" Jan 21 00:29:08 crc kubenswrapper[5118]: I0121 00:29:08.601092 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-d4lsz_0541bb33-5d4a-4ef9-964c-884c727499f6/cni-plugins/0.log" Jan 21 00:29:08 crc kubenswrapper[5118]: I0121 00:29:08.609531 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-d4lsz_0541bb33-5d4a-4ef9-964c-884c727499f6/bond-cni-plugin/0.log" Jan 21 00:29:08 crc kubenswrapper[5118]: I0121 00:29:08.616370 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-d4lsz_0541bb33-5d4a-4ef9-964c-884c727499f6/routeoverride-cni/0.log" Jan 21 00:29:08 crc kubenswrapper[5118]: I0121 00:29:08.623224 5118 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-d4lsz_0541bb33-5d4a-4ef9-964c-884c727499f6/whereabouts-cni-bincopy/0.log"
Jan 21 00:29:08 crc kubenswrapper[5118]: I0121 00:29:08.630310 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-d4lsz_0541bb33-5d4a-4ef9-964c-884c727499f6/whereabouts-cni/0.log"
Jan 21 00:29:08 crc kubenswrapper[5118]: I0121 00:29:08.641409 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-69db94689b-wkjhb_485b5bf0-70af-4e4a-b766-d9e63a94395f/multus-admission-controller/0.log"
Jan 21 00:29:08 crc kubenswrapper[5118]: I0121 00:29:08.649682 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-69db94689b-wkjhb_485b5bf0-70af-4e4a-b766-d9e63a94395f/kube-rbac-proxy/0.log"
Jan 21 00:29:08 crc kubenswrapper[5118]: I0121 00:29:08.688683 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/1.log"
Jan 21 00:29:08 crc kubenswrapper[5118]: I0121 00:29:08.700963 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log"
Jan 21 00:29:08 crc kubenswrapper[5118]: I0121 00:29:08.719662 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-9hvtf_21105fbf-0225-4ba6-ba90-17808d5250c6/network-metrics-daemon/0.log"
Jan 21 00:29:08 crc kubenswrapper[5118]: I0121 00:29:08.724007 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-9hvtf_21105fbf-0225-4ba6-ba90-17808d5250c6/kube-rbac-proxy/0.log"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.159452 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"]
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.179961 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482590-qkpdj"]
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.180112 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.183821 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.184540 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482590-qkpdj"]
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.184657 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"]
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.184722 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.184847 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482590-qkpdj"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.187708 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\""
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.187753 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.193969 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.270672 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/836d4f45-c1f3-4635-9c02-f55956575928-secret-volume\") pod \"collect-profiles-29482590-t9csc\" (UID: \"836d4f45-c1f3-4635-9c02-f55956575928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.271081 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bpkr\" (UniqueName: \"kubernetes.io/projected/836d4f45-c1f3-4635-9c02-f55956575928-kube-api-access-7bpkr\") pod \"collect-profiles-29482590-t9csc\" (UID: \"836d4f45-c1f3-4635-9c02-f55956575928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.271254 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/836d4f45-c1f3-4635-9c02-f55956575928-config-volume\") pod \"collect-profiles-29482590-t9csc\" (UID: \"836d4f45-c1f3-4635-9c02-f55956575928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.271393 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g284f\" (UniqueName: \"kubernetes.io/projected/3e37f2c5-9d60-427a-ab18-95fc2402125e-kube-api-access-g284f\") pod \"auto-csr-approver-29482590-qkpdj\" (UID: \"3e37f2c5-9d60-427a-ab18-95fc2402125e\") " pod="openshift-infra/auto-csr-approver-29482590-qkpdj"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.373241 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/836d4f45-c1f3-4635-9c02-f55956575928-secret-volume\") pod \"collect-profiles-29482590-t9csc\" (UID: \"836d4f45-c1f3-4635-9c02-f55956575928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.373286 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7bpkr\" (UniqueName: \"kubernetes.io/projected/836d4f45-c1f3-4635-9c02-f55956575928-kube-api-access-7bpkr\") pod \"collect-profiles-29482590-t9csc\" (UID: \"836d4f45-c1f3-4635-9c02-f55956575928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.373319 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/836d4f45-c1f3-4635-9c02-f55956575928-config-volume\") pod \"collect-profiles-29482590-t9csc\" (UID: \"836d4f45-c1f3-4635-9c02-f55956575928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.373354 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g284f\" (UniqueName: \"kubernetes.io/projected/3e37f2c5-9d60-427a-ab18-95fc2402125e-kube-api-access-g284f\") pod \"auto-csr-approver-29482590-qkpdj\" (UID: \"3e37f2c5-9d60-427a-ab18-95fc2402125e\") " pod="openshift-infra/auto-csr-approver-29482590-qkpdj"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.374824 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/836d4f45-c1f3-4635-9c02-f55956575928-config-volume\") pod \"collect-profiles-29482590-t9csc\" (UID: \"836d4f45-c1f3-4635-9c02-f55956575928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.392009 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g284f\" (UniqueName: \"kubernetes.io/projected/3e37f2c5-9d60-427a-ab18-95fc2402125e-kube-api-access-g284f\") pod \"auto-csr-approver-29482590-qkpdj\" (UID: \"3e37f2c5-9d60-427a-ab18-95fc2402125e\") " pod="openshift-infra/auto-csr-approver-29482590-qkpdj"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.397115 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/836d4f45-c1f3-4635-9c02-f55956575928-secret-volume\") pod \"collect-profiles-29482590-t9csc\" (UID: \"836d4f45-c1f3-4635-9c02-f55956575928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.400006 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bpkr\" (UniqueName: \"kubernetes.io/projected/836d4f45-c1f3-4635-9c02-f55956575928-kube-api-access-7bpkr\") pod \"collect-profiles-29482590-t9csc\" (UID: \"836d4f45-c1f3-4635-9c02-f55956575928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.501056 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.510475 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482590-qkpdj"
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.934525 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482590-qkpdj"]
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.950928 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 00:30:00 crc kubenswrapper[5118]: I0121 00:30:00.992301 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"]
Jan 21 00:30:01 crc kubenswrapper[5118]: I0121 00:30:01.671384 5118 generic.go:358] "Generic (PLEG): container finished" podID="836d4f45-c1f3-4635-9c02-f55956575928" containerID="3e69d4d6f0351daad87790a2aff440c0d3703276941fa3dc3358801080a029d1" exitCode=0
Jan 21 00:30:01 crc kubenswrapper[5118]: I0121 00:30:01.671548 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc" event={"ID":"836d4f45-c1f3-4635-9c02-f55956575928","Type":"ContainerDied","Data":"3e69d4d6f0351daad87790a2aff440c0d3703276941fa3dc3358801080a029d1"}
Jan 21 00:30:01 crc kubenswrapper[5118]: I0121 00:30:01.671757 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc" event={"ID":"836d4f45-c1f3-4635-9c02-f55956575928","Type":"ContainerStarted","Data":"3ba2a10dc5435b0d1615cdd92c084d1f902dbd1c158030dc0341a6af85417961"}
Jan 21 00:30:01 crc kubenswrapper[5118]: I0121 00:30:01.673400 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482590-qkpdj" event={"ID":"3e37f2c5-9d60-427a-ab18-95fc2402125e","Type":"ContainerStarted","Data":"97bb0cdd610beb1b33ead2ece0b61eaaf92b22579d53751f7afd8501e02b57e8"}
Jan 21 00:30:02 crc kubenswrapper[5118]: I0121 00:30:02.680534 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482590-qkpdj" event={"ID":"3e37f2c5-9d60-427a-ab18-95fc2402125e","Type":"ContainerStarted","Data":"6567a89a2367a46ab640574581a4794dcd550e5ac9b7ef4342e375727537e202"}
Jan 21 00:30:02 crc kubenswrapper[5118]: I0121 00:30:02.697538 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29482590-qkpdj" podStartSLOduration=1.463534654 podStartE2EDuration="2.697524425s" podCreationTimestamp="2026-01-21 00:30:00 +0000 UTC" firstStartedPulling="2026-01-21 00:30:00.951153982 +0000 UTC m=+1256.275401000" lastFinishedPulling="2026-01-21 00:30:02.185143743 +0000 UTC m=+1257.509390771" observedRunningTime="2026-01-21 00:30:02.693726934 +0000 UTC m=+1258.017973952" watchObservedRunningTime="2026-01-21 00:30:02.697524425 +0000 UTC m=+1258.021771443"
Jan 21 00:30:02 crc kubenswrapper[5118]: I0121 00:30:02.929842 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"
Jan 21 00:30:03 crc kubenswrapper[5118]: I0121 00:30:03.124990 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/836d4f45-c1f3-4635-9c02-f55956575928-config-volume\") pod \"836d4f45-c1f3-4635-9c02-f55956575928\" (UID: \"836d4f45-c1f3-4635-9c02-f55956575928\") "
Jan 21 00:30:03 crc kubenswrapper[5118]: I0121 00:30:03.125042 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bpkr\" (UniqueName: \"kubernetes.io/projected/836d4f45-c1f3-4635-9c02-f55956575928-kube-api-access-7bpkr\") pod \"836d4f45-c1f3-4635-9c02-f55956575928\" (UID: \"836d4f45-c1f3-4635-9c02-f55956575928\") "
Jan 21 00:30:03 crc kubenswrapper[5118]: I0121 00:30:03.125152 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/836d4f45-c1f3-4635-9c02-f55956575928-secret-volume\") pod \"836d4f45-c1f3-4635-9c02-f55956575928\" (UID: \"836d4f45-c1f3-4635-9c02-f55956575928\") "
Jan 21 00:30:03 crc kubenswrapper[5118]: I0121 00:30:03.125892 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/836d4f45-c1f3-4635-9c02-f55956575928-config-volume" (OuterVolumeSpecName: "config-volume") pod "836d4f45-c1f3-4635-9c02-f55956575928" (UID: "836d4f45-c1f3-4635-9c02-f55956575928"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 00:30:03 crc kubenswrapper[5118]: I0121 00:30:03.133319 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836d4f45-c1f3-4635-9c02-f55956575928-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "836d4f45-c1f3-4635-9c02-f55956575928" (UID: "836d4f45-c1f3-4635-9c02-f55956575928"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 00:30:03 crc kubenswrapper[5118]: I0121 00:30:03.133877 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836d4f45-c1f3-4635-9c02-f55956575928-kube-api-access-7bpkr" (OuterVolumeSpecName: "kube-api-access-7bpkr") pod "836d4f45-c1f3-4635-9c02-f55956575928" (UID: "836d4f45-c1f3-4635-9c02-f55956575928"). InnerVolumeSpecName "kube-api-access-7bpkr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:30:03 crc kubenswrapper[5118]: I0121 00:30:03.227261 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7bpkr\" (UniqueName: \"kubernetes.io/projected/836d4f45-c1f3-4635-9c02-f55956575928-kube-api-access-7bpkr\") on node \"crc\" DevicePath \"\""
Jan 21 00:30:03 crc kubenswrapper[5118]: I0121 00:30:03.227305 5118 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/836d4f45-c1f3-4635-9c02-f55956575928-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 00:30:03 crc kubenswrapper[5118]: I0121 00:30:03.227317 5118 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/836d4f45-c1f3-4635-9c02-f55956575928-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 00:30:03 crc kubenswrapper[5118]: I0121 00:30:03.689950 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc" event={"ID":"836d4f45-c1f3-4635-9c02-f55956575928","Type":"ContainerDied","Data":"3ba2a10dc5435b0d1615cdd92c084d1f902dbd1c158030dc0341a6af85417961"}
Jan 21 00:30:03 crc kubenswrapper[5118]: I0121 00:30:03.690341 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ba2a10dc5435b0d1615cdd92c084d1f902dbd1c158030dc0341a6af85417961"
Jan 21 00:30:03 crc kubenswrapper[5118]: I0121 00:30:03.690433 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482590-t9csc"
Jan 21 00:30:03 crc kubenswrapper[5118]: I0121 00:30:03.695346 5118 generic.go:358] "Generic (PLEG): container finished" podID="3e37f2c5-9d60-427a-ab18-95fc2402125e" containerID="6567a89a2367a46ab640574581a4794dcd550e5ac9b7ef4342e375727537e202" exitCode=0
Jan 21 00:30:03 crc kubenswrapper[5118]: I0121 00:30:03.695472 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482590-qkpdj" event={"ID":"3e37f2c5-9d60-427a-ab18-95fc2402125e","Type":"ContainerDied","Data":"6567a89a2367a46ab640574581a4794dcd550e5ac9b7ef4342e375727537e202"}
Jan 21 00:30:04 crc kubenswrapper[5118]: I0121 00:30:04.980625 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482590-qkpdj"
Jan 21 00:30:05 crc kubenswrapper[5118]: I0121 00:30:05.053699 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g284f\" (UniqueName: \"kubernetes.io/projected/3e37f2c5-9d60-427a-ab18-95fc2402125e-kube-api-access-g284f\") pod \"3e37f2c5-9d60-427a-ab18-95fc2402125e\" (UID: \"3e37f2c5-9d60-427a-ab18-95fc2402125e\") "
Jan 21 00:30:05 crc kubenswrapper[5118]: I0121 00:30:05.060583 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e37f2c5-9d60-427a-ab18-95fc2402125e-kube-api-access-g284f" (OuterVolumeSpecName: "kube-api-access-g284f") pod "3e37f2c5-9d60-427a-ab18-95fc2402125e" (UID: "3e37f2c5-9d60-427a-ab18-95fc2402125e"). InnerVolumeSpecName "kube-api-access-g284f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:30:05 crc kubenswrapper[5118]: I0121 00:30:05.155036 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g284f\" (UniqueName: \"kubernetes.io/projected/3e37f2c5-9d60-427a-ab18-95fc2402125e-kube-api-access-g284f\") on node \"crc\" DevicePath \"\""
Jan 21 00:30:05 crc kubenswrapper[5118]: I0121 00:30:05.710429 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482590-qkpdj"
Jan 21 00:30:05 crc kubenswrapper[5118]: I0121 00:30:05.710476 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482590-qkpdj" event={"ID":"3e37f2c5-9d60-427a-ab18-95fc2402125e","Type":"ContainerDied","Data":"97bb0cdd610beb1b33ead2ece0b61eaaf92b22579d53751f7afd8501e02b57e8"}
Jan 21 00:30:05 crc kubenswrapper[5118]: I0121 00:30:05.710935 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97bb0cdd610beb1b33ead2ece0b61eaaf92b22579d53751f7afd8501e02b57e8"
Jan 21 00:30:05 crc kubenswrapper[5118]: I0121 00:30:05.763142 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482584-twbhm"]
Jan 21 00:30:05 crc kubenswrapper[5118]: I0121 00:30:05.767274 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482584-twbhm"]
Jan 21 00:30:06 crc kubenswrapper[5118]: I0121 00:30:06.985076 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f01e679-377c-4acc-9906-869bdf589782" path="/var/lib/kubelet/pods/1f01e679-377c-4acc-9906-869bdf589782/volumes"
Jan 21 00:30:33 crc kubenswrapper[5118]: I0121 00:30:33.801515 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:30:33 crc kubenswrapper[5118]: I0121 00:30:33.802143 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:31:03 crc kubenswrapper[5118]: I0121 00:31:03.801359 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:31:03 crc kubenswrapper[5118]: I0121 00:31:03.801787 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:31:06 crc kubenswrapper[5118]: I0121 00:31:06.706225 5118 scope.go:117] "RemoveContainer" containerID="3ad4df8dea1459a8a03042a5fb0b2c493ab533db48b819f887d0b8cf4c6193cf"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.491524 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g5z5f"]
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.492947 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3e37f2c5-9d60-427a-ab18-95fc2402125e" containerName="oc"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.492961 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e37f2c5-9d60-427a-ab18-95fc2402125e" containerName="oc"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.492976 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="836d4f45-c1f3-4635-9c02-f55956575928" containerName="collect-profiles"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.492981 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="836d4f45-c1f3-4635-9c02-f55956575928" containerName="collect-profiles"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.493106 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3e37f2c5-9d60-427a-ab18-95fc2402125e" containerName="oc"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.493122 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="836d4f45-c1f3-4635-9c02-f55956575928" containerName="collect-profiles"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.506303 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g5z5f"]
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.506431 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.541266 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjrx4\" (UniqueName: \"kubernetes.io/projected/946f3e2d-1bb3-45bf-a245-a77badd0f20f-kube-api-access-hjrx4\") pod \"redhat-operators-g5z5f\" (UID: \"946f3e2d-1bb3-45bf-a245-a77badd0f20f\") " pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.541439 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946f3e2d-1bb3-45bf-a245-a77badd0f20f-utilities\") pod \"redhat-operators-g5z5f\" (UID: \"946f3e2d-1bb3-45bf-a245-a77badd0f20f\") " pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.541537 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946f3e2d-1bb3-45bf-a245-a77badd0f20f-catalog-content\") pod \"redhat-operators-g5z5f\" (UID: \"946f3e2d-1bb3-45bf-a245-a77badd0f20f\") " pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.643340 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hjrx4\" (UniqueName: \"kubernetes.io/projected/946f3e2d-1bb3-45bf-a245-a77badd0f20f-kube-api-access-hjrx4\") pod \"redhat-operators-g5z5f\" (UID: \"946f3e2d-1bb3-45bf-a245-a77badd0f20f\") " pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.643395 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946f3e2d-1bb3-45bf-a245-a77badd0f20f-utilities\") pod \"redhat-operators-g5z5f\" (UID: \"946f3e2d-1bb3-45bf-a245-a77badd0f20f\") " pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.643428 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946f3e2d-1bb3-45bf-a245-a77badd0f20f-catalog-content\") pod \"redhat-operators-g5z5f\" (UID: \"946f3e2d-1bb3-45bf-a245-a77badd0f20f\") " pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.643901 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946f3e2d-1bb3-45bf-a245-a77badd0f20f-catalog-content\") pod \"redhat-operators-g5z5f\" (UID: \"946f3e2d-1bb3-45bf-a245-a77badd0f20f\") " pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.644055 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946f3e2d-1bb3-45bf-a245-a77badd0f20f-utilities\") pod \"redhat-operators-g5z5f\" (UID: \"946f3e2d-1bb3-45bf-a245-a77badd0f20f\") " pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.669648 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjrx4\" (UniqueName: \"kubernetes.io/projected/946f3e2d-1bb3-45bf-a245-a77badd0f20f-kube-api-access-hjrx4\") pod \"redhat-operators-g5z5f\" (UID: \"946f3e2d-1bb3-45bf-a245-a77badd0f20f\") " pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:32 crc kubenswrapper[5118]: I0121 00:31:32.826095 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:33 crc kubenswrapper[5118]: I0121 00:31:33.094844 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g5z5f"]
Jan 21 00:31:33 crc kubenswrapper[5118]: I0121 00:31:33.743222 5118 generic.go:358] "Generic (PLEG): container finished" podID="946f3e2d-1bb3-45bf-a245-a77badd0f20f" containerID="2f55e13b9b3d2efedc61105118627e273958e4d3fe204edc910930de2e86aea0" exitCode=0
Jan 21 00:31:33 crc kubenswrapper[5118]: I0121 00:31:33.743287 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5z5f" event={"ID":"946f3e2d-1bb3-45bf-a245-a77badd0f20f","Type":"ContainerDied","Data":"2f55e13b9b3d2efedc61105118627e273958e4d3fe204edc910930de2e86aea0"}
Jan 21 00:31:33 crc kubenswrapper[5118]: I0121 00:31:33.743605 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5z5f" event={"ID":"946f3e2d-1bb3-45bf-a245-a77badd0f20f","Type":"ContainerStarted","Data":"b21515a5904dec133534e84f247c0acaedc2c17a87c973b35506704262f13b0f"}
Jan 21 00:31:33 crc kubenswrapper[5118]: I0121 00:31:33.801019 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:31:33 crc kubenswrapper[5118]: I0121 00:31:33.801102 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:31:33 crc kubenswrapper[5118]: I0121 00:31:33.801190 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-22r9n"
Jan 21 00:31:33 crc kubenswrapper[5118]: I0121 00:31:33.802265 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"197e7e77ca3ea92693e0cbb821721a50117819efe533cb1e47ad37d07b7e056e"} pod="openshift-machine-config-operator/machine-config-daemon-22r9n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 00:31:33 crc kubenswrapper[5118]: I0121 00:31:33.802367 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" containerID="cri-o://197e7e77ca3ea92693e0cbb821721a50117819efe533cb1e47ad37d07b7e056e" gracePeriod=600
Jan 21 00:31:34 crc kubenswrapper[5118]: I0121 00:31:34.753929 5118 generic.go:358] "Generic (PLEG): container finished" podID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerID="197e7e77ca3ea92693e0cbb821721a50117819efe533cb1e47ad37d07b7e056e" exitCode=0
Jan 21 00:31:34 crc kubenswrapper[5118]: I0121 00:31:34.754013 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerDied","Data":"197e7e77ca3ea92693e0cbb821721a50117819efe533cb1e47ad37d07b7e056e"}
Jan 21 00:31:34 crc kubenswrapper[5118]: I0121 00:31:34.755637 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"3f34d1c6794faa2161633d81afe88f4f588963943ab90e805f7ce146e66ccc06"}
Jan 21 00:31:34 crc kubenswrapper[5118]: I0121 00:31:34.755674 5118 scope.go:117] "RemoveContainer" containerID="61c6c2137480cc175302c2e82d6bbb9c15151d1ca8b8cb9acea1f49282d3488a"
Jan 21 00:31:35 crc kubenswrapper[5118]: I0121 00:31:35.764066 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5z5f" event={"ID":"946f3e2d-1bb3-45bf-a245-a77badd0f20f","Type":"ContainerStarted","Data":"ddc73ea4213934997d1cc6ab03c67e0da0499d2fc8ee99e37c6f99d6d951dca0"}
Jan 21 00:31:36 crc kubenswrapper[5118]: I0121 00:31:36.776741 5118 generic.go:358] "Generic (PLEG): container finished" podID="946f3e2d-1bb3-45bf-a245-a77badd0f20f" containerID="ddc73ea4213934997d1cc6ab03c67e0da0499d2fc8ee99e37c6f99d6d951dca0" exitCode=0
Jan 21 00:31:36 crc kubenswrapper[5118]: I0121 00:31:36.776857 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5z5f" event={"ID":"946f3e2d-1bb3-45bf-a245-a77badd0f20f","Type":"ContainerDied","Data":"ddc73ea4213934997d1cc6ab03c67e0da0499d2fc8ee99e37c6f99d6d951dca0"}
Jan 21 00:31:37 crc kubenswrapper[5118]: I0121 00:31:37.786201 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5z5f" event={"ID":"946f3e2d-1bb3-45bf-a245-a77badd0f20f","Type":"ContainerStarted","Data":"0255c10b7a8f23dbe54f1a4f0c402e22bbd15d646465764345aa0bce08eea88d"}
Jan 21 00:31:37 crc kubenswrapper[5118]: I0121 00:31:37.812255 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g5z5f" podStartSLOduration=4.395974733 podStartE2EDuration="5.812233046s" podCreationTimestamp="2026-01-21 00:31:32 +0000 UTC" firstStartedPulling="2026-01-21 00:31:33.744642827 +0000 UTC m=+1349.068889875" lastFinishedPulling="2026-01-21 00:31:35.16090117 +0000 UTC m=+1350.485148188" observedRunningTime="2026-01-21 00:31:37.805519337 +0000 UTC m=+1353.129766355" watchObservedRunningTime="2026-01-21 00:31:37.812233046 +0000 UTC m=+1353.136480074"
Jan 21 00:31:42 crc kubenswrapper[5118]: I0121 00:31:42.826305 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:42 crc kubenswrapper[5118]: I0121 00:31:42.827516 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:42 crc kubenswrapper[5118]: I0121 00:31:42.874365 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:43 crc kubenswrapper[5118]: I0121 00:31:43.903649 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:46 crc kubenswrapper[5118]: I0121 00:31:46.277705 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g5z5f"]
Jan 21 00:31:46 crc kubenswrapper[5118]: I0121 00:31:46.278234 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g5z5f" podUID="946f3e2d-1bb3-45bf-a245-a77badd0f20f" containerName="registry-server" containerID="cri-o://0255c10b7a8f23dbe54f1a4f0c402e22bbd15d646465764345aa0bce08eea88d" gracePeriod=2
Jan 21 00:31:47 crc kubenswrapper[5118]: I0121 00:31:47.874517 5118 generic.go:358] "Generic (PLEG): container finished" podID="946f3e2d-1bb3-45bf-a245-a77badd0f20f" containerID="0255c10b7a8f23dbe54f1a4f0c402e22bbd15d646465764345aa0bce08eea88d" exitCode=0
Jan 21 00:31:47 crc kubenswrapper[5118]: I0121 00:31:47.874708 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5z5f" event={"ID":"946f3e2d-1bb3-45bf-a245-a77badd0f20f","Type":"ContainerDied","Data":"0255c10b7a8f23dbe54f1a4f0c402e22bbd15d646465764345aa0bce08eea88d"}
Jan 21 00:31:47 crc kubenswrapper[5118]: I0121 00:31:47.875201 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5z5f" event={"ID":"946f3e2d-1bb3-45bf-a245-a77badd0f20f","Type":"ContainerDied","Data":"b21515a5904dec133534e84f247c0acaedc2c17a87c973b35506704262f13b0f"}
Jan 21 00:31:47 crc kubenswrapper[5118]: I0121 00:31:47.875218 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b21515a5904dec133534e84f247c0acaedc2c17a87c973b35506704262f13b0f"
Jan 21 00:31:47 crc kubenswrapper[5118]: I0121 00:31:47.903033 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:48 crc kubenswrapper[5118]: I0121 00:31:48.002424 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjrx4\" (UniqueName: \"kubernetes.io/projected/946f3e2d-1bb3-45bf-a245-a77badd0f20f-kube-api-access-hjrx4\") pod \"946f3e2d-1bb3-45bf-a245-a77badd0f20f\" (UID: \"946f3e2d-1bb3-45bf-a245-a77badd0f20f\") "
Jan 21 00:31:48 crc kubenswrapper[5118]: I0121 00:31:48.002464 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946f3e2d-1bb3-45bf-a245-a77badd0f20f-catalog-content\") pod \"946f3e2d-1bb3-45bf-a245-a77badd0f20f\" (UID: \"946f3e2d-1bb3-45bf-a245-a77badd0f20f\") "
Jan 21 00:31:48 crc kubenswrapper[5118]: I0121 00:31:48.002558 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946f3e2d-1bb3-45bf-a245-a77badd0f20f-utilities\") pod \"946f3e2d-1bb3-45bf-a245-a77badd0f20f\" (UID: \"946f3e2d-1bb3-45bf-a245-a77badd0f20f\") "
Jan 21 00:31:48 crc kubenswrapper[5118]: I0121 00:31:48.005248 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/946f3e2d-1bb3-45bf-a245-a77badd0f20f-utilities" (OuterVolumeSpecName: "utilities") pod "946f3e2d-1bb3-45bf-a245-a77badd0f20f" (UID: "946f3e2d-1bb3-45bf-a245-a77badd0f20f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:31:48 crc kubenswrapper[5118]: I0121 00:31:48.026375 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946f3e2d-1bb3-45bf-a245-a77badd0f20f-kube-api-access-hjrx4" (OuterVolumeSpecName: "kube-api-access-hjrx4") pod "946f3e2d-1bb3-45bf-a245-a77badd0f20f" (UID: "946f3e2d-1bb3-45bf-a245-a77badd0f20f"). InnerVolumeSpecName "kube-api-access-hjrx4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:31:48 crc kubenswrapper[5118]: I0121 00:31:48.104335 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hjrx4\" (UniqueName: \"kubernetes.io/projected/946f3e2d-1bb3-45bf-a245-a77badd0f20f-kube-api-access-hjrx4\") on node \"crc\" DevicePath \"\""
Jan 21 00:31:48 crc kubenswrapper[5118]: I0121 00:31:48.104670 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946f3e2d-1bb3-45bf-a245-a77badd0f20f-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 00:31:48 crc kubenswrapper[5118]: I0121 00:31:48.147591 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/946f3e2d-1bb3-45bf-a245-a77badd0f20f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "946f3e2d-1bb3-45bf-a245-a77badd0f20f" (UID: "946f3e2d-1bb3-45bf-a245-a77badd0f20f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:31:48 crc kubenswrapper[5118]: I0121 00:31:48.206555 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946f3e2d-1bb3-45bf-a245-a77badd0f20f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 00:31:48 crc kubenswrapper[5118]: I0121 00:31:48.883475 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g5z5f"
Jan 21 00:31:48 crc kubenswrapper[5118]: I0121 00:31:48.925457 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g5z5f"]
Jan 21 00:31:48 crc kubenswrapper[5118]: I0121 00:31:48.935587 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g5z5f"]
Jan 21 00:31:48 crc kubenswrapper[5118]: I0121 00:31:48.986291 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946f3e2d-1bb3-45bf-a245-a77badd0f20f" path="/var/lib/kubelet/pods/946f3e2d-1bb3-45bf-a245-a77badd0f20f/volumes"
Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.160905 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482592-v5bm8"]
Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.162969 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="946f3e2d-1bb3-45bf-a245-a77badd0f20f" containerName="extract-utilities"
Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.163147 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="946f3e2d-1bb3-45bf-a245-a77badd0f20f" containerName="extract-utilities"
Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.163252 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="946f3e2d-1bb3-45bf-a245-a77badd0f20f" containerName="registry-server"
Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.163267 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="946f3e2d-1bb3-45bf-a245-a77badd0f20f" containerName="registry-server"
Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.163292 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="946f3e2d-1bb3-45bf-a245-a77badd0f20f" containerName="extract-content"
Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.163305 5118 state_mem.go:107] "Deleted CPUSet assignment"
podUID="946f3e2d-1bb3-45bf-a245-a77badd0f20f" containerName="extract-content" Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.163537 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="946f3e2d-1bb3-45bf-a245-a77badd0f20f" containerName="registry-server" Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.172701 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482592-v5bm8" Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.173978 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482592-v5bm8"] Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.174826 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.178290 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.178461 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.222505 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f992m\" (UniqueName: \"kubernetes.io/projected/837aae9d-be20-46f0-ba02-aa898205931a-kube-api-access-f992m\") pod \"auto-csr-approver-29482592-v5bm8\" (UID: \"837aae9d-be20-46f0-ba02-aa898205931a\") " pod="openshift-infra/auto-csr-approver-29482592-v5bm8" Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.324398 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f992m\" (UniqueName: \"kubernetes.io/projected/837aae9d-be20-46f0-ba02-aa898205931a-kube-api-access-f992m\") pod \"auto-csr-approver-29482592-v5bm8\" (UID: 
\"837aae9d-be20-46f0-ba02-aa898205931a\") " pod="openshift-infra/auto-csr-approver-29482592-v5bm8" Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.347677 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f992m\" (UniqueName: \"kubernetes.io/projected/837aae9d-be20-46f0-ba02-aa898205931a-kube-api-access-f992m\") pod \"auto-csr-approver-29482592-v5bm8\" (UID: \"837aae9d-be20-46f0-ba02-aa898205931a\") " pod="openshift-infra/auto-csr-approver-29482592-v5bm8" Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.523615 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482592-v5bm8" Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.787641 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482592-v5bm8"] Jan 21 00:32:00 crc kubenswrapper[5118]: I0121 00:32:00.996115 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482592-v5bm8" event={"ID":"837aae9d-be20-46f0-ba02-aa898205931a","Type":"ContainerStarted","Data":"1b7ecf60bb1794f0a1a26e35f542e816e1a302e367703ef4e3306bac8cafade4"} Jan 21 00:32:04 crc kubenswrapper[5118]: I0121 00:32:04.026138 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482592-v5bm8" event={"ID":"837aae9d-be20-46f0-ba02-aa898205931a","Type":"ContainerStarted","Data":"d857a819e963c91b28887fa077c095874c4245a5d39a55d75c15dbad79a35bb9"} Jan 21 00:32:04 crc kubenswrapper[5118]: I0121 00:32:04.047210 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29482592-v5bm8" podStartSLOduration=1.425207077 podStartE2EDuration="4.047186748s" podCreationTimestamp="2026-01-21 00:32:00 +0000 UTC" firstStartedPulling="2026-01-21 00:32:00.799263197 +0000 UTC m=+1376.123510255" lastFinishedPulling="2026-01-21 00:32:03.421242898 +0000 UTC m=+1378.745489926" 
observedRunningTime="2026-01-21 00:32:04.046524591 +0000 UTC m=+1379.370771609" watchObservedRunningTime="2026-01-21 00:32:04.047186748 +0000 UTC m=+1379.371433786" Jan 21 00:32:05 crc kubenswrapper[5118]: I0121 00:32:05.036968 5118 generic.go:358] "Generic (PLEG): container finished" podID="837aae9d-be20-46f0-ba02-aa898205931a" containerID="d857a819e963c91b28887fa077c095874c4245a5d39a55d75c15dbad79a35bb9" exitCode=0 Jan 21 00:32:05 crc kubenswrapper[5118]: I0121 00:32:05.037231 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482592-v5bm8" event={"ID":"837aae9d-be20-46f0-ba02-aa898205931a","Type":"ContainerDied","Data":"d857a819e963c91b28887fa077c095874c4245a5d39a55d75c15dbad79a35bb9"} Jan 21 00:32:06 crc kubenswrapper[5118]: I0121 00:32:06.300304 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482592-v5bm8" Jan 21 00:32:06 crc kubenswrapper[5118]: I0121 00:32:06.452471 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f992m\" (UniqueName: \"kubernetes.io/projected/837aae9d-be20-46f0-ba02-aa898205931a-kube-api-access-f992m\") pod \"837aae9d-be20-46f0-ba02-aa898205931a\" (UID: \"837aae9d-be20-46f0-ba02-aa898205931a\") " Jan 21 00:32:06 crc kubenswrapper[5118]: I0121 00:32:06.458309 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/837aae9d-be20-46f0-ba02-aa898205931a-kube-api-access-f992m" (OuterVolumeSpecName: "kube-api-access-f992m") pod "837aae9d-be20-46f0-ba02-aa898205931a" (UID: "837aae9d-be20-46f0-ba02-aa898205931a"). InnerVolumeSpecName "kube-api-access-f992m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:32:06 crc kubenswrapper[5118]: I0121 00:32:06.553964 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f992m\" (UniqueName: \"kubernetes.io/projected/837aae9d-be20-46f0-ba02-aa898205931a-kube-api-access-f992m\") on node \"crc\" DevicePath \"\"" Jan 21 00:32:07 crc kubenswrapper[5118]: I0121 00:32:07.051570 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482592-v5bm8" Jan 21 00:32:07 crc kubenswrapper[5118]: I0121 00:32:07.051687 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482592-v5bm8" event={"ID":"837aae9d-be20-46f0-ba02-aa898205931a","Type":"ContainerDied","Data":"1b7ecf60bb1794f0a1a26e35f542e816e1a302e367703ef4e3306bac8cafade4"} Jan 21 00:32:07 crc kubenswrapper[5118]: I0121 00:32:07.051723 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b7ecf60bb1794f0a1a26e35f542e816e1a302e367703ef4e3306bac8cafade4" Jan 21 00:32:07 crc kubenswrapper[5118]: I0121 00:32:07.129055 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482586-d784z"] Jan 21 00:32:07 crc kubenswrapper[5118]: I0121 00:32:07.135825 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482586-d784z"] Jan 21 00:32:08 crc kubenswrapper[5118]: I0121 00:32:08.983383 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68ab75a7-408a-4c6c-b232-3b08fa01168f" path="/var/lib/kubelet/pods/68ab75a7-408a-4c6c-b232-3b08fa01168f/volumes" Jan 21 00:33:06 crc kubenswrapper[5118]: I0121 00:33:06.836087 5118 scope.go:117] "RemoveContainer" containerID="e7c14aa1a065f0bed30e2daf241d55f897716ea6e853a751286554e62ed26430" Jan 21 00:34:00 crc kubenswrapper[5118]: I0121 00:34:00.149738 5118 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29482594-jwc5p"] Jan 21 00:34:00 crc kubenswrapper[5118]: I0121 00:34:00.152128 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="837aae9d-be20-46f0-ba02-aa898205931a" containerName="oc" Jan 21 00:34:00 crc kubenswrapper[5118]: I0121 00:34:00.152241 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="837aae9d-be20-46f0-ba02-aa898205931a" containerName="oc" Jan 21 00:34:00 crc kubenswrapper[5118]: I0121 00:34:00.152418 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="837aae9d-be20-46f0-ba02-aa898205931a" containerName="oc" Jan 21 00:34:00 crc kubenswrapper[5118]: I0121 00:34:00.156910 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482594-jwc5p" Jan 21 00:34:00 crc kubenswrapper[5118]: I0121 00:34:00.160055 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 00:34:00 crc kubenswrapper[5118]: I0121 00:34:00.160457 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 00:34:00 crc kubenswrapper[5118]: I0121 00:34:00.161274 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 00:34:00 crc kubenswrapper[5118]: I0121 00:34:00.166913 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482594-jwc5p"] Jan 21 00:34:00 crc kubenswrapper[5118]: I0121 00:34:00.255944 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gtc4\" (UniqueName: \"kubernetes.io/projected/d5dfa3ff-c2df-44c2-8c3c-a5ba37929520-kube-api-access-4gtc4\") pod \"auto-csr-approver-29482594-jwc5p\" (UID: \"d5dfa3ff-c2df-44c2-8c3c-a5ba37929520\") " 
pod="openshift-infra/auto-csr-approver-29482594-jwc5p" Jan 21 00:34:00 crc kubenswrapper[5118]: I0121 00:34:00.356869 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4gtc4\" (UniqueName: \"kubernetes.io/projected/d5dfa3ff-c2df-44c2-8c3c-a5ba37929520-kube-api-access-4gtc4\") pod \"auto-csr-approver-29482594-jwc5p\" (UID: \"d5dfa3ff-c2df-44c2-8c3c-a5ba37929520\") " pod="openshift-infra/auto-csr-approver-29482594-jwc5p" Jan 21 00:34:00 crc kubenswrapper[5118]: I0121 00:34:00.377426 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gtc4\" (UniqueName: \"kubernetes.io/projected/d5dfa3ff-c2df-44c2-8c3c-a5ba37929520-kube-api-access-4gtc4\") pod \"auto-csr-approver-29482594-jwc5p\" (UID: \"d5dfa3ff-c2df-44c2-8c3c-a5ba37929520\") " pod="openshift-infra/auto-csr-approver-29482594-jwc5p" Jan 21 00:34:00 crc kubenswrapper[5118]: I0121 00:34:00.498486 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482594-jwc5p" Jan 21 00:34:01 crc kubenswrapper[5118]: I0121 00:34:01.007124 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482594-jwc5p"] Jan 21 00:34:01 crc kubenswrapper[5118]: I0121 00:34:01.024544 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482594-jwc5p" event={"ID":"d5dfa3ff-c2df-44c2-8c3c-a5ba37929520","Type":"ContainerStarted","Data":"3d235978c2c0aed6520bc11512bca477f09765e0583dd5e79c42b04b2f0af7d5"} Jan 21 00:34:03 crc kubenswrapper[5118]: I0121 00:34:03.041801 5118 generic.go:358] "Generic (PLEG): container finished" podID="d5dfa3ff-c2df-44c2-8c3c-a5ba37929520" containerID="be3a251c8e774b62c78373cac2bca3a694c14b8ed3d0086f9df49876f3a7b23b" exitCode=0 Jan 21 00:34:03 crc kubenswrapper[5118]: I0121 00:34:03.041908 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29482594-jwc5p" event={"ID":"d5dfa3ff-c2df-44c2-8c3c-a5ba37929520","Type":"ContainerDied","Data":"be3a251c8e774b62c78373cac2bca3a694c14b8ed3d0086f9df49876f3a7b23b"} Jan 21 00:34:03 crc kubenswrapper[5118]: I0121 00:34:03.801414 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:34:03 crc kubenswrapper[5118]: I0121 00:34:03.801487 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:34:04 crc kubenswrapper[5118]: I0121 00:34:04.423663 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482594-jwc5p" Jan 21 00:34:04 crc kubenswrapper[5118]: I0121 00:34:04.533082 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gtc4\" (UniqueName: \"kubernetes.io/projected/d5dfa3ff-c2df-44c2-8c3c-a5ba37929520-kube-api-access-4gtc4\") pod \"d5dfa3ff-c2df-44c2-8c3c-a5ba37929520\" (UID: \"d5dfa3ff-c2df-44c2-8c3c-a5ba37929520\") " Jan 21 00:34:04 crc kubenswrapper[5118]: I0121 00:34:04.547055 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5dfa3ff-c2df-44c2-8c3c-a5ba37929520-kube-api-access-4gtc4" (OuterVolumeSpecName: "kube-api-access-4gtc4") pod "d5dfa3ff-c2df-44c2-8c3c-a5ba37929520" (UID: "d5dfa3ff-c2df-44c2-8c3c-a5ba37929520"). InnerVolumeSpecName "kube-api-access-4gtc4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:34:04 crc kubenswrapper[5118]: I0121 00:34:04.634729 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4gtc4\" (UniqueName: \"kubernetes.io/projected/d5dfa3ff-c2df-44c2-8c3c-a5ba37929520-kube-api-access-4gtc4\") on node \"crc\" DevicePath \"\"" Jan 21 00:34:05 crc kubenswrapper[5118]: I0121 00:34:05.069404 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482594-jwc5p" Jan 21 00:34:05 crc kubenswrapper[5118]: I0121 00:34:05.069433 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482594-jwc5p" event={"ID":"d5dfa3ff-c2df-44c2-8c3c-a5ba37929520","Type":"ContainerDied","Data":"3d235978c2c0aed6520bc11512bca477f09765e0583dd5e79c42b04b2f0af7d5"} Jan 21 00:34:05 crc kubenswrapper[5118]: I0121 00:34:05.069493 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d235978c2c0aed6520bc11512bca477f09765e0583dd5e79c42b04b2f0af7d5" Jan 21 00:34:05 crc kubenswrapper[5118]: I0121 00:34:05.497019 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482588-z9j84"] Jan 21 00:34:05 crc kubenswrapper[5118]: I0121 00:34:05.506106 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482588-z9j84"] Jan 21 00:34:05 crc kubenswrapper[5118]: I0121 00:34:05.522515 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log" Jan 21 00:34:05 crc kubenswrapper[5118]: I0121 00:34:05.523377 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log" Jan 21 00:34:05 crc kubenswrapper[5118]: I0121 00:34:05.539095 5118 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 00:34:05 crc kubenswrapper[5118]: I0121 00:34:05.539685 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 00:34:06 crc kubenswrapper[5118]: I0121 00:34:06.994241 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77fa341b-7377-4da6-b48c-f4e1d7c98f79" path="/var/lib/kubelet/pods/77fa341b-7377-4da6-b48c-f4e1d7c98f79/volumes" Jan 21 00:34:06 crc kubenswrapper[5118]: I0121 00:34:06.994649 5118 scope.go:117] "RemoveContainer" containerID="199adca0646d99849c89aa14eeb02e5709d11a30a55f26dfb2c51c6032ab455d" Jan 21 00:34:33 crc kubenswrapper[5118]: I0121 00:34:33.801283 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:34:33 crc kubenswrapper[5118]: I0121 00:34:33.802075 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:35:02 crc kubenswrapper[5118]: I0121 00:35:02.694055 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vrxhx"] Jan 21 00:35:02 crc kubenswrapper[5118]: I0121 00:35:02.695291 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5dfa3ff-c2df-44c2-8c3c-a5ba37929520" containerName="oc" Jan 21 00:35:02 crc kubenswrapper[5118]: I0121 
00:35:02.695304 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5dfa3ff-c2df-44c2-8c3c-a5ba37929520" containerName="oc" Jan 21 00:35:02 crc kubenswrapper[5118]: I0121 00:35:02.695433 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="d5dfa3ff-c2df-44c2-8c3c-a5ba37929520" containerName="oc" Jan 21 00:35:02 crc kubenswrapper[5118]: I0121 00:35:02.703103 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:02 crc kubenswrapper[5118]: I0121 00:35:02.710639 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrxhx"] Jan 21 00:35:02 crc kubenswrapper[5118]: I0121 00:35:02.789504 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c434951f-cf03-4928-9612-6cae9bbdc1d2-catalog-content\") pod \"certified-operators-vrxhx\" (UID: \"c434951f-cf03-4928-9612-6cae9bbdc1d2\") " pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:02 crc kubenswrapper[5118]: I0121 00:35:02.789597 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c434951f-cf03-4928-9612-6cae9bbdc1d2-utilities\") pod \"certified-operators-vrxhx\" (UID: \"c434951f-cf03-4928-9612-6cae9bbdc1d2\") " pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:02 crc kubenswrapper[5118]: I0121 00:35:02.789643 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w92hb\" (UniqueName: \"kubernetes.io/projected/c434951f-cf03-4928-9612-6cae9bbdc1d2-kube-api-access-w92hb\") pod \"certified-operators-vrxhx\" (UID: \"c434951f-cf03-4928-9612-6cae9bbdc1d2\") " pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:02 crc kubenswrapper[5118]: I0121 00:35:02.891108 
5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c434951f-cf03-4928-9612-6cae9bbdc1d2-catalog-content\") pod \"certified-operators-vrxhx\" (UID: \"c434951f-cf03-4928-9612-6cae9bbdc1d2\") " pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:02 crc kubenswrapper[5118]: I0121 00:35:02.891178 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c434951f-cf03-4928-9612-6cae9bbdc1d2-utilities\") pod \"certified-operators-vrxhx\" (UID: \"c434951f-cf03-4928-9612-6cae9bbdc1d2\") " pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:02 crc kubenswrapper[5118]: I0121 00:35:02.891208 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w92hb\" (UniqueName: \"kubernetes.io/projected/c434951f-cf03-4928-9612-6cae9bbdc1d2-kube-api-access-w92hb\") pod \"certified-operators-vrxhx\" (UID: \"c434951f-cf03-4928-9612-6cae9bbdc1d2\") " pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:02 crc kubenswrapper[5118]: I0121 00:35:02.892024 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c434951f-cf03-4928-9612-6cae9bbdc1d2-catalog-content\") pod \"certified-operators-vrxhx\" (UID: \"c434951f-cf03-4928-9612-6cae9bbdc1d2\") " pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:02 crc kubenswrapper[5118]: I0121 00:35:02.892335 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c434951f-cf03-4928-9612-6cae9bbdc1d2-utilities\") pod \"certified-operators-vrxhx\" (UID: \"c434951f-cf03-4928-9612-6cae9bbdc1d2\") " pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:02 crc kubenswrapper[5118]: I0121 00:35:02.914835 5118 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w92hb\" (UniqueName: \"kubernetes.io/projected/c434951f-cf03-4928-9612-6cae9bbdc1d2-kube-api-access-w92hb\") pod \"certified-operators-vrxhx\" (UID: \"c434951f-cf03-4928-9612-6cae9bbdc1d2\") " pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:03 crc kubenswrapper[5118]: I0121 00:35:03.029985 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:03 crc kubenswrapper[5118]: I0121 00:35:03.447632 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrxhx"] Jan 21 00:35:03 crc kubenswrapper[5118]: I0121 00:35:03.449350 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 00:35:03 crc kubenswrapper[5118]: I0121 00:35:03.801194 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:35:03 crc kubenswrapper[5118]: I0121 00:35:03.801463 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:35:03 crc kubenswrapper[5118]: I0121 00:35:03.801535 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:35:03 crc kubenswrapper[5118]: I0121 00:35:03.802083 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"3f34d1c6794faa2161633d81afe88f4f588963943ab90e805f7ce146e66ccc06"} pod="openshift-machine-config-operator/machine-config-daemon-22r9n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:35:03 crc kubenswrapper[5118]: I0121 00:35:03.802128 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" containerID="cri-o://3f34d1c6794faa2161633d81afe88f4f588963943ab90e805f7ce146e66ccc06" gracePeriod=600 Jan 21 00:35:04 crc kubenswrapper[5118]: I0121 00:35:04.110747 5118 generic.go:358] "Generic (PLEG): container finished" podID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerID="3f34d1c6794faa2161633d81afe88f4f588963943ab90e805f7ce146e66ccc06" exitCode=0 Jan 21 00:35:04 crc kubenswrapper[5118]: I0121 00:35:04.110936 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerDied","Data":"3f34d1c6794faa2161633d81afe88f4f588963943ab90e805f7ce146e66ccc06"} Jan 21 00:35:04 crc kubenswrapper[5118]: I0121 00:35:04.111212 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"} Jan 21 00:35:04 crc kubenswrapper[5118]: I0121 00:35:04.111243 5118 scope.go:117] "RemoveContainer" containerID="197e7e77ca3ea92693e0cbb821721a50117819efe533cb1e47ad37d07b7e056e" Jan 21 00:35:04 crc kubenswrapper[5118]: I0121 00:35:04.115075 5118 generic.go:358] "Generic (PLEG): container finished" podID="c434951f-cf03-4928-9612-6cae9bbdc1d2" 
containerID="97a41cb1fdfa3416f8c6b3434a4a5d8b972bedb6184af4b03fdd66686015a0ec" exitCode=0 Jan 21 00:35:04 crc kubenswrapper[5118]: I0121 00:35:04.115284 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrxhx" event={"ID":"c434951f-cf03-4928-9612-6cae9bbdc1d2","Type":"ContainerDied","Data":"97a41cb1fdfa3416f8c6b3434a4a5d8b972bedb6184af4b03fdd66686015a0ec"} Jan 21 00:35:04 crc kubenswrapper[5118]: I0121 00:35:04.115317 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrxhx" event={"ID":"c434951f-cf03-4928-9612-6cae9bbdc1d2","Type":"ContainerStarted","Data":"752ebd64530a01bdb40cc06b619e8ba9cb3a2b109e9aa5062eac190dd852c70c"} Jan 21 00:35:06 crc kubenswrapper[5118]: I0121 00:35:06.143135 5118 generic.go:358] "Generic (PLEG): container finished" podID="c434951f-cf03-4928-9612-6cae9bbdc1d2" containerID="aa02ee3dc832047262a51d6aaa9259f162ad7b74fe489fcfabd18da24513dcfa" exitCode=0 Jan 21 00:35:06 crc kubenswrapper[5118]: I0121 00:35:06.143428 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrxhx" event={"ID":"c434951f-cf03-4928-9612-6cae9bbdc1d2","Type":"ContainerDied","Data":"aa02ee3dc832047262a51d6aaa9259f162ad7b74fe489fcfabd18da24513dcfa"} Jan 21 00:35:07 crc kubenswrapper[5118]: I0121 00:35:07.153408 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrxhx" event={"ID":"c434951f-cf03-4928-9612-6cae9bbdc1d2","Type":"ContainerStarted","Data":"c1ddafd7c95972070b72357cb3b917980e204370266e877cdccab24bea99ab8f"} Jan 21 00:35:07 crc kubenswrapper[5118]: I0121 00:35:07.184751 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vrxhx" podStartSLOduration=4.196104281 podStartE2EDuration="5.184724608s" podCreationTimestamp="2026-01-21 00:35:02 +0000 UTC" firstStartedPulling="2026-01-21 00:35:04.115721498 
+0000 UTC m=+1559.439968516" lastFinishedPulling="2026-01-21 00:35:05.104341795 +0000 UTC m=+1560.428588843" observedRunningTime="2026-01-21 00:35:07.179239922 +0000 UTC m=+1562.503486940" watchObservedRunningTime="2026-01-21 00:35:07.184724608 +0000 UTC m=+1562.508971646" Jan 21 00:35:13 crc kubenswrapper[5118]: I0121 00:35:13.031081 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:13 crc kubenswrapper[5118]: I0121 00:35:13.031748 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:13 crc kubenswrapper[5118]: I0121 00:35:13.101592 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:13 crc kubenswrapper[5118]: I0121 00:35:13.269863 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:13 crc kubenswrapper[5118]: I0121 00:35:13.336230 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vrxhx"] Jan 21 00:35:15 crc kubenswrapper[5118]: I0121 00:35:15.227533 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vrxhx" podUID="c434951f-cf03-4928-9612-6cae9bbdc1d2" containerName="registry-server" containerID="cri-o://c1ddafd7c95972070b72357cb3b917980e204370266e877cdccab24bea99ab8f" gracePeriod=2 Jan 21 00:35:15 crc kubenswrapper[5118]: I0121 00:35:15.638012 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:15 crc kubenswrapper[5118]: I0121 00:35:15.733660 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w92hb\" (UniqueName: \"kubernetes.io/projected/c434951f-cf03-4928-9612-6cae9bbdc1d2-kube-api-access-w92hb\") pod \"c434951f-cf03-4928-9612-6cae9bbdc1d2\" (UID: \"c434951f-cf03-4928-9612-6cae9bbdc1d2\") " Jan 21 00:35:15 crc kubenswrapper[5118]: I0121 00:35:15.733733 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c434951f-cf03-4928-9612-6cae9bbdc1d2-utilities\") pod \"c434951f-cf03-4928-9612-6cae9bbdc1d2\" (UID: \"c434951f-cf03-4928-9612-6cae9bbdc1d2\") " Jan 21 00:35:15 crc kubenswrapper[5118]: I0121 00:35:15.733821 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c434951f-cf03-4928-9612-6cae9bbdc1d2-catalog-content\") pod \"c434951f-cf03-4928-9612-6cae9bbdc1d2\" (UID: \"c434951f-cf03-4928-9612-6cae9bbdc1d2\") " Jan 21 00:35:15 crc kubenswrapper[5118]: I0121 00:35:15.741999 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c434951f-cf03-4928-9612-6cae9bbdc1d2-utilities" (OuterVolumeSpecName: "utilities") pod "c434951f-cf03-4928-9612-6cae9bbdc1d2" (UID: "c434951f-cf03-4928-9612-6cae9bbdc1d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:35:15 crc kubenswrapper[5118]: I0121 00:35:15.749129 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c434951f-cf03-4928-9612-6cae9bbdc1d2-kube-api-access-w92hb" (OuterVolumeSpecName: "kube-api-access-w92hb") pod "c434951f-cf03-4928-9612-6cae9bbdc1d2" (UID: "c434951f-cf03-4928-9612-6cae9bbdc1d2"). InnerVolumeSpecName "kube-api-access-w92hb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:35:15 crc kubenswrapper[5118]: I0121 00:35:15.772420 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c434951f-cf03-4928-9612-6cae9bbdc1d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c434951f-cf03-4928-9612-6cae9bbdc1d2" (UID: "c434951f-cf03-4928-9612-6cae9bbdc1d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:35:15 crc kubenswrapper[5118]: I0121 00:35:15.838355 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w92hb\" (UniqueName: \"kubernetes.io/projected/c434951f-cf03-4928-9612-6cae9bbdc1d2-kube-api-access-w92hb\") on node \"crc\" DevicePath \"\"" Jan 21 00:35:15 crc kubenswrapper[5118]: I0121 00:35:15.838397 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c434951f-cf03-4928-9612-6cae9bbdc1d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:35:15 crc kubenswrapper[5118]: I0121 00:35:15.838409 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c434951f-cf03-4928-9612-6cae9bbdc1d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.239903 5118 generic.go:358] "Generic (PLEG): container finished" podID="c434951f-cf03-4928-9612-6cae9bbdc1d2" containerID="c1ddafd7c95972070b72357cb3b917980e204370266e877cdccab24bea99ab8f" exitCode=0 Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.240026 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrxhx" event={"ID":"c434951f-cf03-4928-9612-6cae9bbdc1d2","Type":"ContainerDied","Data":"c1ddafd7c95972070b72357cb3b917980e204370266e877cdccab24bea99ab8f"} Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.240064 5118 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrxhx" Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.240351 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrxhx" event={"ID":"c434951f-cf03-4928-9612-6cae9bbdc1d2","Type":"ContainerDied","Data":"752ebd64530a01bdb40cc06b619e8ba9cb3a2b109e9aa5062eac190dd852c70c"} Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.240382 5118 scope.go:117] "RemoveContainer" containerID="c1ddafd7c95972070b72357cb3b917980e204370266e877cdccab24bea99ab8f" Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.273842 5118 scope.go:117] "RemoveContainer" containerID="aa02ee3dc832047262a51d6aaa9259f162ad7b74fe489fcfabd18da24513dcfa" Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.293759 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vrxhx"] Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.303165 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vrxhx"] Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.323200 5118 scope.go:117] "RemoveContainer" containerID="97a41cb1fdfa3416f8c6b3434a4a5d8b972bedb6184af4b03fdd66686015a0ec" Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.339492 5118 scope.go:117] "RemoveContainer" containerID="c1ddafd7c95972070b72357cb3b917980e204370266e877cdccab24bea99ab8f" Jan 21 00:35:16 crc kubenswrapper[5118]: E0121 00:35:16.339931 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ddafd7c95972070b72357cb3b917980e204370266e877cdccab24bea99ab8f\": container with ID starting with c1ddafd7c95972070b72357cb3b917980e204370266e877cdccab24bea99ab8f not found: ID does not exist" containerID="c1ddafd7c95972070b72357cb3b917980e204370266e877cdccab24bea99ab8f" Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.339981 
5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ddafd7c95972070b72357cb3b917980e204370266e877cdccab24bea99ab8f"} err="failed to get container status \"c1ddafd7c95972070b72357cb3b917980e204370266e877cdccab24bea99ab8f\": rpc error: code = NotFound desc = could not find container \"c1ddafd7c95972070b72357cb3b917980e204370266e877cdccab24bea99ab8f\": container with ID starting with c1ddafd7c95972070b72357cb3b917980e204370266e877cdccab24bea99ab8f not found: ID does not exist" Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.340014 5118 scope.go:117] "RemoveContainer" containerID="aa02ee3dc832047262a51d6aaa9259f162ad7b74fe489fcfabd18da24513dcfa" Jan 21 00:35:16 crc kubenswrapper[5118]: E0121 00:35:16.340316 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa02ee3dc832047262a51d6aaa9259f162ad7b74fe489fcfabd18da24513dcfa\": container with ID starting with aa02ee3dc832047262a51d6aaa9259f162ad7b74fe489fcfabd18da24513dcfa not found: ID does not exist" containerID="aa02ee3dc832047262a51d6aaa9259f162ad7b74fe489fcfabd18da24513dcfa" Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.340352 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa02ee3dc832047262a51d6aaa9259f162ad7b74fe489fcfabd18da24513dcfa"} err="failed to get container status \"aa02ee3dc832047262a51d6aaa9259f162ad7b74fe489fcfabd18da24513dcfa\": rpc error: code = NotFound desc = could not find container \"aa02ee3dc832047262a51d6aaa9259f162ad7b74fe489fcfabd18da24513dcfa\": container with ID starting with aa02ee3dc832047262a51d6aaa9259f162ad7b74fe489fcfabd18da24513dcfa not found: ID does not exist" Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.340380 5118 scope.go:117] "RemoveContainer" containerID="97a41cb1fdfa3416f8c6b3434a4a5d8b972bedb6184af4b03fdd66686015a0ec" Jan 21 00:35:16 crc kubenswrapper[5118]: E0121 
00:35:16.340661 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97a41cb1fdfa3416f8c6b3434a4a5d8b972bedb6184af4b03fdd66686015a0ec\": container with ID starting with 97a41cb1fdfa3416f8c6b3434a4a5d8b972bedb6184af4b03fdd66686015a0ec not found: ID does not exist" containerID="97a41cb1fdfa3416f8c6b3434a4a5d8b972bedb6184af4b03fdd66686015a0ec" Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.340722 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97a41cb1fdfa3416f8c6b3434a4a5d8b972bedb6184af4b03fdd66686015a0ec"} err="failed to get container status \"97a41cb1fdfa3416f8c6b3434a4a5d8b972bedb6184af4b03fdd66686015a0ec\": rpc error: code = NotFound desc = could not find container \"97a41cb1fdfa3416f8c6b3434a4a5d8b972bedb6184af4b03fdd66686015a0ec\": container with ID starting with 97a41cb1fdfa3416f8c6b3434a4a5d8b972bedb6184af4b03fdd66686015a0ec not found: ID does not exist" Jan 21 00:35:16 crc kubenswrapper[5118]: I0121 00:35:16.985510 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c434951f-cf03-4928-9612-6cae9bbdc1d2" path="/var/lib/kubelet/pods/c434951f-cf03-4928-9612-6cae9bbdc1d2/volumes" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.148215 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482596-fjrts"] Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.149432 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c434951f-cf03-4928-9612-6cae9bbdc1d2" containerName="registry-server" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.149446 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c434951f-cf03-4928-9612-6cae9bbdc1d2" containerName="registry-server" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.149481 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="c434951f-cf03-4928-9612-6cae9bbdc1d2" containerName="extract-utilities" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.149487 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c434951f-cf03-4928-9612-6cae9bbdc1d2" containerName="extract-utilities" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.149497 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c434951f-cf03-4928-9612-6cae9bbdc1d2" containerName="extract-content" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.149502 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c434951f-cf03-4928-9612-6cae9bbdc1d2" containerName="extract-content" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.149625 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="c434951f-cf03-4928-9612-6cae9bbdc1d2" containerName="registry-server" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.153069 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482596-fjrts" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.155585 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.155870 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.156265 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.157547 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482596-fjrts"] Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.222048 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4nzp\" (UniqueName: \"kubernetes.io/projected/6f6a9ced-1305-4655-82af-4a2beef6feb6-kube-api-access-k4nzp\") pod \"auto-csr-approver-29482596-fjrts\" (UID: \"6f6a9ced-1305-4655-82af-4a2beef6feb6\") " pod="openshift-infra/auto-csr-approver-29482596-fjrts" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.323801 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k4nzp\" (UniqueName: \"kubernetes.io/projected/6f6a9ced-1305-4655-82af-4a2beef6feb6-kube-api-access-k4nzp\") pod \"auto-csr-approver-29482596-fjrts\" (UID: \"6f6a9ced-1305-4655-82af-4a2beef6feb6\") " pod="openshift-infra/auto-csr-approver-29482596-fjrts" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.350112 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4nzp\" (UniqueName: \"kubernetes.io/projected/6f6a9ced-1305-4655-82af-4a2beef6feb6-kube-api-access-k4nzp\") pod \"auto-csr-approver-29482596-fjrts\" (UID: 
\"6f6a9ced-1305-4655-82af-4a2beef6feb6\") " pod="openshift-infra/auto-csr-approver-29482596-fjrts" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.488126 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482596-fjrts" Jan 21 00:36:00 crc kubenswrapper[5118]: I0121 00:36:00.914523 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482596-fjrts"] Jan 21 00:36:01 crc kubenswrapper[5118]: I0121 00:36:01.646531 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482596-fjrts" event={"ID":"6f6a9ced-1305-4655-82af-4a2beef6feb6","Type":"ContainerStarted","Data":"226aed64fd6bd03e764224d08547adc4921f2771044694e1377b29cba8892c4d"} Jan 21 00:36:02 crc kubenswrapper[5118]: I0121 00:36:02.656359 5118 generic.go:358] "Generic (PLEG): container finished" podID="6f6a9ced-1305-4655-82af-4a2beef6feb6" containerID="977374be3ca424a36a1b1c7be9071090bfee64bfe2c0c55a7d280ed2020697c4" exitCode=0 Jan 21 00:36:02 crc kubenswrapper[5118]: I0121 00:36:02.656423 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482596-fjrts" event={"ID":"6f6a9ced-1305-4655-82af-4a2beef6feb6","Type":"ContainerDied","Data":"977374be3ca424a36a1b1c7be9071090bfee64bfe2c0c55a7d280ed2020697c4"} Jan 21 00:36:03 crc kubenswrapper[5118]: I0121 00:36:03.914782 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482596-fjrts" Jan 21 00:36:03 crc kubenswrapper[5118]: I0121 00:36:03.980117 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4nzp\" (UniqueName: \"kubernetes.io/projected/6f6a9ced-1305-4655-82af-4a2beef6feb6-kube-api-access-k4nzp\") pod \"6f6a9ced-1305-4655-82af-4a2beef6feb6\" (UID: \"6f6a9ced-1305-4655-82af-4a2beef6feb6\") " Jan 21 00:36:03 crc kubenswrapper[5118]: I0121 00:36:03.988728 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6a9ced-1305-4655-82af-4a2beef6feb6-kube-api-access-k4nzp" (OuterVolumeSpecName: "kube-api-access-k4nzp") pod "6f6a9ced-1305-4655-82af-4a2beef6feb6" (UID: "6f6a9ced-1305-4655-82af-4a2beef6feb6"). InnerVolumeSpecName "kube-api-access-k4nzp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:36:04 crc kubenswrapper[5118]: I0121 00:36:04.082522 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k4nzp\" (UniqueName: \"kubernetes.io/projected/6f6a9ced-1305-4655-82af-4a2beef6feb6-kube-api-access-k4nzp\") on node \"crc\" DevicePath \"\"" Jan 21 00:36:04 crc kubenswrapper[5118]: I0121 00:36:04.676706 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482596-fjrts" event={"ID":"6f6a9ced-1305-4655-82af-4a2beef6feb6","Type":"ContainerDied","Data":"226aed64fd6bd03e764224d08547adc4921f2771044694e1377b29cba8892c4d"} Jan 21 00:36:04 crc kubenswrapper[5118]: I0121 00:36:04.676749 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="226aed64fd6bd03e764224d08547adc4921f2771044694e1377b29cba8892c4d" Jan 21 00:36:04 crc kubenswrapper[5118]: I0121 00:36:04.676823 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482596-fjrts" Jan 21 00:36:05 crc kubenswrapper[5118]: I0121 00:36:05.004037 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482590-qkpdj"] Jan 21 00:36:05 crc kubenswrapper[5118]: I0121 00:36:05.009878 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482590-qkpdj"] Jan 21 00:36:06 crc kubenswrapper[5118]: I0121 00:36:06.984593 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e37f2c5-9d60-427a-ab18-95fc2402125e" path="/var/lib/kubelet/pods/3e37f2c5-9d60-427a-ab18-95fc2402125e/volumes" Jan 21 00:36:07 crc kubenswrapper[5118]: I0121 00:36:07.162627 5118 scope.go:117] "RemoveContainer" containerID="6567a89a2367a46ab640574581a4794dcd550e5ac9b7ef4342e375727537e202" Jan 21 00:37:33 crc kubenswrapper[5118]: I0121 00:37:33.801051 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:37:33 crc kubenswrapper[5118]: I0121 00:37:33.802029 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:38:00 crc kubenswrapper[5118]: I0121 00:38:00.137813 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482598-9mfxs"] Jan 21 00:38:00 crc kubenswrapper[5118]: I0121 00:38:00.139851 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6f6a9ced-1305-4655-82af-4a2beef6feb6" containerName="oc" Jan 21 00:38:00 
crc kubenswrapper[5118]: I0121 00:38:00.139870 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6a9ced-1305-4655-82af-4a2beef6feb6" containerName="oc" Jan 21 00:38:00 crc kubenswrapper[5118]: I0121 00:38:00.140067 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="6f6a9ced-1305-4655-82af-4a2beef6feb6" containerName="oc" Jan 21 00:38:00 crc kubenswrapper[5118]: I0121 00:38:00.150108 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482598-9mfxs" Jan 21 00:38:00 crc kubenswrapper[5118]: I0121 00:38:00.160993 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 00:38:00 crc kubenswrapper[5118]: I0121 00:38:00.161258 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482598-9mfxs"] Jan 21 00:38:00 crc kubenswrapper[5118]: I0121 00:38:00.161305 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 00:38:00 crc kubenswrapper[5118]: I0121 00:38:00.161500 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 00:38:00 crc kubenswrapper[5118]: I0121 00:38:00.239941 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5qjq\" (UniqueName: \"kubernetes.io/projected/70e2e467-a6d5-4826-adf2-a7127fa6a71b-kube-api-access-k5qjq\") pod \"auto-csr-approver-29482598-9mfxs\" (UID: \"70e2e467-a6d5-4826-adf2-a7127fa6a71b\") " pod="openshift-infra/auto-csr-approver-29482598-9mfxs" Jan 21 00:38:00 crc kubenswrapper[5118]: I0121 00:38:00.341632 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k5qjq\" (UniqueName: 
\"kubernetes.io/projected/70e2e467-a6d5-4826-adf2-a7127fa6a71b-kube-api-access-k5qjq\") pod \"auto-csr-approver-29482598-9mfxs\" (UID: \"70e2e467-a6d5-4826-adf2-a7127fa6a71b\") " pod="openshift-infra/auto-csr-approver-29482598-9mfxs" Jan 21 00:38:00 crc kubenswrapper[5118]: I0121 00:38:00.379803 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5qjq\" (UniqueName: \"kubernetes.io/projected/70e2e467-a6d5-4826-adf2-a7127fa6a71b-kube-api-access-k5qjq\") pod \"auto-csr-approver-29482598-9mfxs\" (UID: \"70e2e467-a6d5-4826-adf2-a7127fa6a71b\") " pod="openshift-infra/auto-csr-approver-29482598-9mfxs" Jan 21 00:38:00 crc kubenswrapper[5118]: I0121 00:38:00.482107 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482598-9mfxs" Jan 21 00:38:00 crc kubenswrapper[5118]: I0121 00:38:00.784095 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482598-9mfxs"] Jan 21 00:38:00 crc kubenswrapper[5118]: I0121 00:38:00.820612 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482598-9mfxs" event={"ID":"70e2e467-a6d5-4826-adf2-a7127fa6a71b","Type":"ContainerStarted","Data":"74925bac27f0dba5f7ac64e4c569960a3c3cbd378e7a40ad78e7d691d5d13a1a"} Jan 21 00:38:02 crc kubenswrapper[5118]: I0121 00:38:02.838796 5118 generic.go:358] "Generic (PLEG): container finished" podID="70e2e467-a6d5-4826-adf2-a7127fa6a71b" containerID="41599b6bd7b487d29452472f276b2067292bc514d0aa373028e324e4165f0c8d" exitCode=0 Jan 21 00:38:02 crc kubenswrapper[5118]: I0121 00:38:02.838939 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482598-9mfxs" event={"ID":"70e2e467-a6d5-4826-adf2-a7127fa6a71b","Type":"ContainerDied","Data":"41599b6bd7b487d29452472f276b2067292bc514d0aa373028e324e4165f0c8d"} Jan 21 00:38:03 crc kubenswrapper[5118]: I0121 00:38:03.801283 5118 
patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:38:03 crc kubenswrapper[5118]: I0121 00:38:03.801398 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:38:04 crc kubenswrapper[5118]: I0121 00:38:04.143704 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482598-9mfxs" Jan 21 00:38:04 crc kubenswrapper[5118]: I0121 00:38:04.305357 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5qjq\" (UniqueName: \"kubernetes.io/projected/70e2e467-a6d5-4826-adf2-a7127fa6a71b-kube-api-access-k5qjq\") pod \"70e2e467-a6d5-4826-adf2-a7127fa6a71b\" (UID: \"70e2e467-a6d5-4826-adf2-a7127fa6a71b\") " Jan 21 00:38:04 crc kubenswrapper[5118]: I0121 00:38:04.313974 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e2e467-a6d5-4826-adf2-a7127fa6a71b-kube-api-access-k5qjq" (OuterVolumeSpecName: "kube-api-access-k5qjq") pod "70e2e467-a6d5-4826-adf2-a7127fa6a71b" (UID: "70e2e467-a6d5-4826-adf2-a7127fa6a71b"). InnerVolumeSpecName "kube-api-access-k5qjq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:38:04 crc kubenswrapper[5118]: I0121 00:38:04.407148 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k5qjq\" (UniqueName: \"kubernetes.io/projected/70e2e467-a6d5-4826-adf2-a7127fa6a71b-kube-api-access-k5qjq\") on node \"crc\" DevicePath \"\"" Jan 21 00:38:04 crc kubenswrapper[5118]: I0121 00:38:04.856392 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482598-9mfxs" event={"ID":"70e2e467-a6d5-4826-adf2-a7127fa6a71b","Type":"ContainerDied","Data":"74925bac27f0dba5f7ac64e4c569960a3c3cbd378e7a40ad78e7d691d5d13a1a"} Jan 21 00:38:04 crc kubenswrapper[5118]: I0121 00:38:04.856437 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74925bac27f0dba5f7ac64e4c569960a3c3cbd378e7a40ad78e7d691d5d13a1a" Jan 21 00:38:04 crc kubenswrapper[5118]: I0121 00:38:04.856509 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482598-9mfxs" Jan 21 00:38:05 crc kubenswrapper[5118]: I0121 00:38:05.223669 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482592-v5bm8"] Jan 21 00:38:05 crc kubenswrapper[5118]: I0121 00:38:05.230506 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482592-v5bm8"] Jan 21 00:38:06 crc kubenswrapper[5118]: I0121 00:38:06.988012 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="837aae9d-be20-46f0-ba02-aa898205931a" path="/var/lib/kubelet/pods/837aae9d-be20-46f0-ba02-aa898205931a/volumes" Jan 21 00:38:07 crc kubenswrapper[5118]: I0121 00:38:07.303886 5118 scope.go:117] "RemoveContainer" containerID="0255c10b7a8f23dbe54f1a4f0c402e22bbd15d646465764345aa0bce08eea88d" Jan 21 00:38:07 crc kubenswrapper[5118]: I0121 00:38:07.329240 5118 scope.go:117] "RemoveContainer" 
containerID="ddc73ea4213934997d1cc6ab03c67e0da0499d2fc8ee99e37c6f99d6d951dca0" Jan 21 00:38:07 crc kubenswrapper[5118]: I0121 00:38:07.351667 5118 scope.go:117] "RemoveContainer" containerID="2f55e13b9b3d2efedc61105118627e273958e4d3fe204edc910930de2e86aea0" Jan 21 00:38:07 crc kubenswrapper[5118]: I0121 00:38:07.388112 5118 scope.go:117] "RemoveContainer" containerID="d857a819e963c91b28887fa077c095874c4245a5d39a55d75c15dbad79a35bb9" Jan 21 00:38:33 crc kubenswrapper[5118]: I0121 00:38:33.801265 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:38:33 crc kubenswrapper[5118]: I0121 00:38:33.801906 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:38:33 crc kubenswrapper[5118]: I0121 00:38:33.801957 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:38:33 crc kubenswrapper[5118]: I0121 00:38:33.802758 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"} pod="openshift-machine-config-operator/machine-config-daemon-22r9n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:38:33 crc kubenswrapper[5118]: I0121 00:38:33.802820 5118 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" containerID="cri-o://e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57" gracePeriod=600 Jan 21 00:38:34 crc kubenswrapper[5118]: E0121 00:38:34.026509 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 00:38:34 crc kubenswrapper[5118]: I0121 00:38:34.130777 5118 generic.go:358] "Generic (PLEG): container finished" podID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57" exitCode=0 Jan 21 00:38:34 crc kubenswrapper[5118]: I0121 00:38:34.130940 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerDied","Data":"e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"} Jan 21 00:38:34 crc kubenswrapper[5118]: I0121 00:38:34.131000 5118 scope.go:117] "RemoveContainer" containerID="3f34d1c6794faa2161633d81afe88f4f588963943ab90e805f7ce146e66ccc06" Jan 21 00:38:34 crc kubenswrapper[5118]: I0121 00:38:34.131782 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57" Jan 21 00:38:34 crc kubenswrapper[5118]: E0121 00:38:34.132248 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:38:45 crc kubenswrapper[5118]: I0121 00:38:45.005778 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:38:45 crc kubenswrapper[5118]: E0121 00:38:45.006810 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.621373 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9t7w8"]
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.627962 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70e2e467-a6d5-4826-adf2-a7127fa6a71b" containerName="oc"
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.628228 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e2e467-a6d5-4826-adf2-a7127fa6a71b" containerName="oc"
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.628667 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="70e2e467-a6d5-4826-adf2-a7127fa6a71b" containerName="oc"
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.636855 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.649354 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t7w8"]
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.702040 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h4kc\" (UniqueName: \"kubernetes.io/projected/ab9826f9-2932-45e2-bcae-ccd72837a5c2-kube-api-access-8h4kc\") pod \"community-operators-9t7w8\" (UID: \"ab9826f9-2932-45e2-bcae-ccd72837a5c2\") " pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.702127 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab9826f9-2932-45e2-bcae-ccd72837a5c2-catalog-content\") pod \"community-operators-9t7w8\" (UID: \"ab9826f9-2932-45e2-bcae-ccd72837a5c2\") " pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.702218 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab9826f9-2932-45e2-bcae-ccd72837a5c2-utilities\") pod \"community-operators-9t7w8\" (UID: \"ab9826f9-2932-45e2-bcae-ccd72837a5c2\") " pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.803059 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8h4kc\" (UniqueName: \"kubernetes.io/projected/ab9826f9-2932-45e2-bcae-ccd72837a5c2-kube-api-access-8h4kc\") pod \"community-operators-9t7w8\" (UID: \"ab9826f9-2932-45e2-bcae-ccd72837a5c2\") " pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.803137 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab9826f9-2932-45e2-bcae-ccd72837a5c2-catalog-content\") pod \"community-operators-9t7w8\" (UID: \"ab9826f9-2932-45e2-bcae-ccd72837a5c2\") " pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.803183 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab9826f9-2932-45e2-bcae-ccd72837a5c2-utilities\") pod \"community-operators-9t7w8\" (UID: \"ab9826f9-2932-45e2-bcae-ccd72837a5c2\") " pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.803710 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab9826f9-2932-45e2-bcae-ccd72837a5c2-catalog-content\") pod \"community-operators-9t7w8\" (UID: \"ab9826f9-2932-45e2-bcae-ccd72837a5c2\") " pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.803767 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab9826f9-2932-45e2-bcae-ccd72837a5c2-utilities\") pod \"community-operators-9t7w8\" (UID: \"ab9826f9-2932-45e2-bcae-ccd72837a5c2\") " pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.826594 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h4kc\" (UniqueName: \"kubernetes.io/projected/ab9826f9-2932-45e2-bcae-ccd72837a5c2-kube-api-access-8h4kc\") pod \"community-operators-9t7w8\" (UID: \"ab9826f9-2932-45e2-bcae-ccd72837a5c2\") " pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:38:50 crc kubenswrapper[5118]: I0121 00:38:50.965082 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:38:51 crc kubenswrapper[5118]: I0121 00:38:51.271646 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t7w8"]
Jan 21 00:38:51 crc kubenswrapper[5118]: I0121 00:38:51.301420 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t7w8" event={"ID":"ab9826f9-2932-45e2-bcae-ccd72837a5c2","Type":"ContainerStarted","Data":"8b71771a107816689dbe305490fc0c5fa7038da4f9f28e84271892d4122c472d"}
Jan 21 00:38:52 crc kubenswrapper[5118]: I0121 00:38:52.309770 5118 generic.go:358] "Generic (PLEG): container finished" podID="ab9826f9-2932-45e2-bcae-ccd72837a5c2" containerID="1343b1f60bc23fe9fff5decbd67af33a6a7c8bfebd4bc6ba166cb35ba7eef632" exitCode=0
Jan 21 00:38:52 crc kubenswrapper[5118]: I0121 00:38:52.309863 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t7w8" event={"ID":"ab9826f9-2932-45e2-bcae-ccd72837a5c2","Type":"ContainerDied","Data":"1343b1f60bc23fe9fff5decbd67af33a6a7c8bfebd4bc6ba166cb35ba7eef632"}
Jan 21 00:38:53 crc kubenswrapper[5118]: I0121 00:38:53.320750 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t7w8" event={"ID":"ab9826f9-2932-45e2-bcae-ccd72837a5c2","Type":"ContainerStarted","Data":"7150e48580947f3098d1d404aa65bb28eba25788fe6ef53255964142ba05f6a3"}
Jan 21 00:38:54 crc kubenswrapper[5118]: I0121 00:38:54.327990 5118 generic.go:358] "Generic (PLEG): container finished" podID="ab9826f9-2932-45e2-bcae-ccd72837a5c2" containerID="7150e48580947f3098d1d404aa65bb28eba25788fe6ef53255964142ba05f6a3" exitCode=0
Jan 21 00:38:54 crc kubenswrapper[5118]: I0121 00:38:54.328045 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t7w8" event={"ID":"ab9826f9-2932-45e2-bcae-ccd72837a5c2","Type":"ContainerDied","Data":"7150e48580947f3098d1d404aa65bb28eba25788fe6ef53255964142ba05f6a3"}
Jan 21 00:38:55 crc kubenswrapper[5118]: I0121 00:38:55.337112 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t7w8" event={"ID":"ab9826f9-2932-45e2-bcae-ccd72837a5c2","Type":"ContainerStarted","Data":"394a4d1b07041d1f04cea73353cf26aaaadfae94518e3c017a3c0df05e3c7277"}
Jan 21 00:38:55 crc kubenswrapper[5118]: I0121 00:38:55.358126 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9t7w8" podStartSLOduration=4.791461992 podStartE2EDuration="5.358108307s" podCreationTimestamp="2026-01-21 00:38:50 +0000 UTC" firstStartedPulling="2026-01-21 00:38:52.310760393 +0000 UTC m=+1787.635007411" lastFinishedPulling="2026-01-21 00:38:52.877406708 +0000 UTC m=+1788.201653726" observedRunningTime="2026-01-21 00:38:55.356149265 +0000 UTC m=+1790.680396283" watchObservedRunningTime="2026-01-21 00:38:55.358108307 +0000 UTC m=+1790.682355325"
Jan 21 00:38:57 crc kubenswrapper[5118]: I0121 00:38:57.976028 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:38:57 crc kubenswrapper[5118]: E0121 00:38:57.976607 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:39:00 crc kubenswrapper[5118]: I0121 00:39:00.965641 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:39:00 crc kubenswrapper[5118]: I0121 00:39:00.965909 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:39:01 crc kubenswrapper[5118]: I0121 00:39:01.020138 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:39:01 crc kubenswrapper[5118]: I0121 00:39:01.445058 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:39:01 crc kubenswrapper[5118]: I0121 00:39:01.501336 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9t7w8"]
Jan 21 00:39:03 crc kubenswrapper[5118]: I0121 00:39:03.405678 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9t7w8" podUID="ab9826f9-2932-45e2-bcae-ccd72837a5c2" containerName="registry-server" containerID="cri-o://394a4d1b07041d1f04cea73353cf26aaaadfae94518e3c017a3c0df05e3c7277" gracePeriod=2
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.323141 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.355694 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab9826f9-2932-45e2-bcae-ccd72837a5c2-utilities\") pod \"ab9826f9-2932-45e2-bcae-ccd72837a5c2\" (UID: \"ab9826f9-2932-45e2-bcae-ccd72837a5c2\") "
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.355938 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h4kc\" (UniqueName: \"kubernetes.io/projected/ab9826f9-2932-45e2-bcae-ccd72837a5c2-kube-api-access-8h4kc\") pod \"ab9826f9-2932-45e2-bcae-ccd72837a5c2\" (UID: \"ab9826f9-2932-45e2-bcae-ccd72837a5c2\") "
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.355981 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab9826f9-2932-45e2-bcae-ccd72837a5c2-catalog-content\") pod \"ab9826f9-2932-45e2-bcae-ccd72837a5c2\" (UID: \"ab9826f9-2932-45e2-bcae-ccd72837a5c2\") "
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.357911 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab9826f9-2932-45e2-bcae-ccd72837a5c2-utilities" (OuterVolumeSpecName: "utilities") pod "ab9826f9-2932-45e2-bcae-ccd72837a5c2" (UID: "ab9826f9-2932-45e2-bcae-ccd72837a5c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.376449 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab9826f9-2932-45e2-bcae-ccd72837a5c2-kube-api-access-8h4kc" (OuterVolumeSpecName: "kube-api-access-8h4kc") pod "ab9826f9-2932-45e2-bcae-ccd72837a5c2" (UID: "ab9826f9-2932-45e2-bcae-ccd72837a5c2"). InnerVolumeSpecName "kube-api-access-8h4kc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.413405 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab9826f9-2932-45e2-bcae-ccd72837a5c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab9826f9-2932-45e2-bcae-ccd72837a5c2" (UID: "ab9826f9-2932-45e2-bcae-ccd72837a5c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.416653 5118 generic.go:358] "Generic (PLEG): container finished" podID="ab9826f9-2932-45e2-bcae-ccd72837a5c2" containerID="394a4d1b07041d1f04cea73353cf26aaaadfae94518e3c017a3c0df05e3c7277" exitCode=0
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.416692 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t7w8" event={"ID":"ab9826f9-2932-45e2-bcae-ccd72837a5c2","Type":"ContainerDied","Data":"394a4d1b07041d1f04cea73353cf26aaaadfae94518e3c017a3c0df05e3c7277"}
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.416753 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t7w8" event={"ID":"ab9826f9-2932-45e2-bcae-ccd72837a5c2","Type":"ContainerDied","Data":"8b71771a107816689dbe305490fc0c5fa7038da4f9f28e84271892d4122c472d"}
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.416776 5118 scope.go:117] "RemoveContainer" containerID="394a4d1b07041d1f04cea73353cf26aaaadfae94518e3c017a3c0df05e3c7277"
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.416840 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t7w8"
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.444795 5118 scope.go:117] "RemoveContainer" containerID="7150e48580947f3098d1d404aa65bb28eba25788fe6ef53255964142ba05f6a3"
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.458811 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8h4kc\" (UniqueName: \"kubernetes.io/projected/ab9826f9-2932-45e2-bcae-ccd72837a5c2-kube-api-access-8h4kc\") on node \"crc\" DevicePath \"\""
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.458844 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab9826f9-2932-45e2-bcae-ccd72837a5c2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.458856 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab9826f9-2932-45e2-bcae-ccd72837a5c2-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.459319 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9t7w8"]
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.464967 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9t7w8"]
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.478484 5118 scope.go:117] "RemoveContainer" containerID="1343b1f60bc23fe9fff5decbd67af33a6a7c8bfebd4bc6ba166cb35ba7eef632"
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.505840 5118 scope.go:117] "RemoveContainer" containerID="394a4d1b07041d1f04cea73353cf26aaaadfae94518e3c017a3c0df05e3c7277"
Jan 21 00:39:04 crc kubenswrapper[5118]: E0121 00:39:04.506462 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"394a4d1b07041d1f04cea73353cf26aaaadfae94518e3c017a3c0df05e3c7277\": container with ID starting with 394a4d1b07041d1f04cea73353cf26aaaadfae94518e3c017a3c0df05e3c7277 not found: ID does not exist" containerID="394a4d1b07041d1f04cea73353cf26aaaadfae94518e3c017a3c0df05e3c7277"
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.506500 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"394a4d1b07041d1f04cea73353cf26aaaadfae94518e3c017a3c0df05e3c7277"} err="failed to get container status \"394a4d1b07041d1f04cea73353cf26aaaadfae94518e3c017a3c0df05e3c7277\": rpc error: code = NotFound desc = could not find container \"394a4d1b07041d1f04cea73353cf26aaaadfae94518e3c017a3c0df05e3c7277\": container with ID starting with 394a4d1b07041d1f04cea73353cf26aaaadfae94518e3c017a3c0df05e3c7277 not found: ID does not exist"
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.506522 5118 scope.go:117] "RemoveContainer" containerID="7150e48580947f3098d1d404aa65bb28eba25788fe6ef53255964142ba05f6a3"
Jan 21 00:39:04 crc kubenswrapper[5118]: E0121 00:39:04.506856 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7150e48580947f3098d1d404aa65bb28eba25788fe6ef53255964142ba05f6a3\": container with ID starting with 7150e48580947f3098d1d404aa65bb28eba25788fe6ef53255964142ba05f6a3 not found: ID does not exist" containerID="7150e48580947f3098d1d404aa65bb28eba25788fe6ef53255964142ba05f6a3"
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.506879 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7150e48580947f3098d1d404aa65bb28eba25788fe6ef53255964142ba05f6a3"} err="failed to get container status \"7150e48580947f3098d1d404aa65bb28eba25788fe6ef53255964142ba05f6a3\": rpc error: code = NotFound desc = could not find container \"7150e48580947f3098d1d404aa65bb28eba25788fe6ef53255964142ba05f6a3\": container with ID starting with 7150e48580947f3098d1d404aa65bb28eba25788fe6ef53255964142ba05f6a3 not found: ID does not exist"
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.506896 5118 scope.go:117] "RemoveContainer" containerID="1343b1f60bc23fe9fff5decbd67af33a6a7c8bfebd4bc6ba166cb35ba7eef632"
Jan 21 00:39:04 crc kubenswrapper[5118]: E0121 00:39:04.507913 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1343b1f60bc23fe9fff5decbd67af33a6a7c8bfebd4bc6ba166cb35ba7eef632\": container with ID starting with 1343b1f60bc23fe9fff5decbd67af33a6a7c8bfebd4bc6ba166cb35ba7eef632 not found: ID does not exist" containerID="1343b1f60bc23fe9fff5decbd67af33a6a7c8bfebd4bc6ba166cb35ba7eef632"
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.507949 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1343b1f60bc23fe9fff5decbd67af33a6a7c8bfebd4bc6ba166cb35ba7eef632"} err="failed to get container status \"1343b1f60bc23fe9fff5decbd67af33a6a7c8bfebd4bc6ba166cb35ba7eef632\": rpc error: code = NotFound desc = could not find container \"1343b1f60bc23fe9fff5decbd67af33a6a7c8bfebd4bc6ba166cb35ba7eef632\": container with ID starting with 1343b1f60bc23fe9fff5decbd67af33a6a7c8bfebd4bc6ba166cb35ba7eef632 not found: ID does not exist"
Jan 21 00:39:04 crc kubenswrapper[5118]: I0121 00:39:04.989817 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab9826f9-2932-45e2-bcae-ccd72837a5c2" path="/var/lib/kubelet/pods/ab9826f9-2932-45e2-bcae-ccd72837a5c2/volumes"
Jan 21 00:39:05 crc kubenswrapper[5118]: I0121 00:39:05.684494 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log"
Jan 21 00:39:05 crc kubenswrapper[5118]: I0121 00:39:05.684574 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log"
Jan 21 00:39:05 crc kubenswrapper[5118]: I0121 00:39:05.702620 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 00:39:05 crc kubenswrapper[5118]: I0121 00:39:05.702684 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 00:39:08 crc kubenswrapper[5118]: I0121 00:39:08.977794 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:39:08 crc kubenswrapper[5118]: E0121 00:39:08.980382 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:39:21 crc kubenswrapper[5118]: I0121 00:39:21.978142 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:39:21 crc kubenswrapper[5118]: E0121 00:39:21.979917 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:39:34 crc kubenswrapper[5118]: I0121 00:39:34.993275 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:39:34 crc kubenswrapper[5118]: E0121 00:39:34.994539 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:39:48 crc kubenswrapper[5118]: I0121 00:39:48.997524 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:39:49 crc kubenswrapper[5118]: E0121 00:39:48.998876 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:39:59 crc kubenswrapper[5118]: I0121 00:39:59.975736 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:39:59 crc kubenswrapper[5118]: E0121 00:39:59.978476 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.151087 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482600-hcbz8"]
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.151887 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab9826f9-2932-45e2-bcae-ccd72837a5c2" containerName="extract-content"
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.151910 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab9826f9-2932-45e2-bcae-ccd72837a5c2" containerName="extract-content"
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.151955 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab9826f9-2932-45e2-bcae-ccd72837a5c2" containerName="extract-utilities"
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.151964 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab9826f9-2932-45e2-bcae-ccd72837a5c2" containerName="extract-utilities"
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.152018 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab9826f9-2932-45e2-bcae-ccd72837a5c2" containerName="registry-server"
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.152026 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab9826f9-2932-45e2-bcae-ccd72837a5c2" containerName="registry-server"
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.152188 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="ab9826f9-2932-45e2-bcae-ccd72837a5c2" containerName="registry-server"
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.159250 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482600-hcbz8"
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.162890 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.163341 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.164197 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\""
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.170003 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482600-hcbz8"]
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.189858 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr8cn\" (UniqueName: \"kubernetes.io/projected/bd67781f-b2ee-450e-bdac-fe85f9014d24-kube-api-access-fr8cn\") pod \"auto-csr-approver-29482600-hcbz8\" (UID: \"bd67781f-b2ee-450e-bdac-fe85f9014d24\") " pod="openshift-infra/auto-csr-approver-29482600-hcbz8"
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.290972 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fr8cn\" (UniqueName: \"kubernetes.io/projected/bd67781f-b2ee-450e-bdac-fe85f9014d24-kube-api-access-fr8cn\") pod \"auto-csr-approver-29482600-hcbz8\" (UID: \"bd67781f-b2ee-450e-bdac-fe85f9014d24\") " pod="openshift-infra/auto-csr-approver-29482600-hcbz8"
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.315817 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr8cn\" (UniqueName: \"kubernetes.io/projected/bd67781f-b2ee-450e-bdac-fe85f9014d24-kube-api-access-fr8cn\") pod \"auto-csr-approver-29482600-hcbz8\" (UID: \"bd67781f-b2ee-450e-bdac-fe85f9014d24\") " pod="openshift-infra/auto-csr-approver-29482600-hcbz8"
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.523567 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482600-hcbz8"
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.780832 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482600-hcbz8"]
Jan 21 00:40:00 crc kubenswrapper[5118]: I0121 00:40:00.988677 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482600-hcbz8" event={"ID":"bd67781f-b2ee-450e-bdac-fe85f9014d24","Type":"ContainerStarted","Data":"cff2f80892ce2c765b89afed2fdb86ad6fd8b94039f3a6fa88c88cce2b740cf8"}
Jan 21 00:40:03 crc kubenswrapper[5118]: I0121 00:40:03.011709 5118 generic.go:358] "Generic (PLEG): container finished" podID="bd67781f-b2ee-450e-bdac-fe85f9014d24" containerID="ee2d0cbc305365ef96f9d1209325e1afc9d45487a3f00cfe47acf9a547ad7ceb" exitCode=0
Jan 21 00:40:03 crc kubenswrapper[5118]: I0121 00:40:03.011843 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482600-hcbz8" event={"ID":"bd67781f-b2ee-450e-bdac-fe85f9014d24","Type":"ContainerDied","Data":"ee2d0cbc305365ef96f9d1209325e1afc9d45487a3f00cfe47acf9a547ad7ceb"}
Jan 21 00:40:04 crc kubenswrapper[5118]: I0121 00:40:04.370513 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482600-hcbz8"
Jan 21 00:40:04 crc kubenswrapper[5118]: I0121 00:40:04.454680 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr8cn\" (UniqueName: \"kubernetes.io/projected/bd67781f-b2ee-450e-bdac-fe85f9014d24-kube-api-access-fr8cn\") pod \"bd67781f-b2ee-450e-bdac-fe85f9014d24\" (UID: \"bd67781f-b2ee-450e-bdac-fe85f9014d24\") "
Jan 21 00:40:04 crc kubenswrapper[5118]: I0121 00:40:04.461826 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd67781f-b2ee-450e-bdac-fe85f9014d24-kube-api-access-fr8cn" (OuterVolumeSpecName: "kube-api-access-fr8cn") pod "bd67781f-b2ee-450e-bdac-fe85f9014d24" (UID: "bd67781f-b2ee-450e-bdac-fe85f9014d24"). InnerVolumeSpecName "kube-api-access-fr8cn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:40:04 crc kubenswrapper[5118]: I0121 00:40:04.556512 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fr8cn\" (UniqueName: \"kubernetes.io/projected/bd67781f-b2ee-450e-bdac-fe85f9014d24-kube-api-access-fr8cn\") on node \"crc\" DevicePath \"\""
Jan 21 00:40:05 crc kubenswrapper[5118]: I0121 00:40:05.032213 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482600-hcbz8" event={"ID":"bd67781f-b2ee-450e-bdac-fe85f9014d24","Type":"ContainerDied","Data":"cff2f80892ce2c765b89afed2fdb86ad6fd8b94039f3a6fa88c88cce2b740cf8"}
Jan 21 00:40:05 crc kubenswrapper[5118]: I0121 00:40:05.032265 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cff2f80892ce2c765b89afed2fdb86ad6fd8b94039f3a6fa88c88cce2b740cf8"
Jan 21 00:40:05 crc kubenswrapper[5118]: I0121 00:40:05.032355 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482600-hcbz8"
Jan 21 00:40:05 crc kubenswrapper[5118]: I0121 00:40:05.455976 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482594-jwc5p"]
Jan 21 00:40:05 crc kubenswrapper[5118]: I0121 00:40:05.462922 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482594-jwc5p"]
Jan 21 00:40:06 crc kubenswrapper[5118]: I0121 00:40:06.989200 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5dfa3ff-c2df-44c2-8c3c-a5ba37929520" path="/var/lib/kubelet/pods/d5dfa3ff-c2df-44c2-8c3c-a5ba37929520/volumes"
Jan 21 00:40:07 crc kubenswrapper[5118]: I0121 00:40:07.564242 5118 scope.go:117] "RemoveContainer" containerID="be3a251c8e774b62c78373cac2bca3a694c14b8ed3d0086f9df49876f3a7b23b"
Jan 21 00:40:13 crc kubenswrapper[5118]: I0121 00:40:13.975458 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:40:13 crc kubenswrapper[5118]: E0121 00:40:13.976445 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:40:28 crc kubenswrapper[5118]: I0121 00:40:28.975757 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:40:28 crc kubenswrapper[5118]: E0121 00:40:28.976582 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:40:42 crc kubenswrapper[5118]: I0121 00:40:42.977108 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:40:42 crc kubenswrapper[5118]: E0121 00:40:42.978406 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:40:57 crc kubenswrapper[5118]: I0121 00:40:57.976004 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:40:57 crc kubenswrapper[5118]: E0121 00:40:57.976961 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:41:08 crc kubenswrapper[5118]: I0121 00:41:08.975928 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:41:08 crc kubenswrapper[5118]: E0121 00:41:08.976865 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:41:21 crc kubenswrapper[5118]: I0121 00:41:21.976408 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:41:21 crc kubenswrapper[5118]: E0121 00:41:21.977424 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:41:33 crc kubenswrapper[5118]: I0121 00:41:33.975615 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:41:33 crc kubenswrapper[5118]: E0121 00:41:33.977597 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:41:48 crc kubenswrapper[5118]: I0121 00:41:48.975695 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:41:48 crc kubenswrapper[5118]: E0121 00:41:48.976594 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:41:59 crc kubenswrapper[5118]: I0121 00:41:59.977501 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57"
Jan 21 00:41:59 crc kubenswrapper[5118]: E0121 00:41:59.978456 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:42:00 crc kubenswrapper[5118]: I0121 00:42:00.155237 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482602-zwgtg"]
Jan 21 00:42:00 crc kubenswrapper[5118]: I0121 00:42:00.156226 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bd67781f-b2ee-450e-bdac-fe85f9014d24" containerName="oc"
Jan 21 00:42:00 crc kubenswrapper[5118]: I0121 00:42:00.156251 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd67781f-b2ee-450e-bdac-fe85f9014d24" containerName="oc"
Jan 21 00:42:00 crc kubenswrapper[5118]: I0121 00:42:00.156413 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="bd67781f-b2ee-450e-bdac-fe85f9014d24" containerName="oc"
Jan 21 00:42:00 crc kubenswrapper[5118]: I0121 00:42:00.162381 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482602-zwgtg"
Jan 21 00:42:00 crc kubenswrapper[5118]: I0121 00:42:00.163290 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482602-zwgtg"]
Jan 21 00:42:00 crc kubenswrapper[5118]: I0121 00:42:00.208634 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 00:42:00 crc kubenswrapper[5118]: I0121 00:42:00.209140 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\""
Jan 21 00:42:00 crc kubenswrapper[5118]: I0121 00:42:00.218974 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 00:42:00 crc kubenswrapper[5118]: I0121 00:42:00.263059 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtblg\" (UniqueName: \"kubernetes.io/projected/382c0b4c-851c-4cc6-aaa8-554ac4ebe397-kube-api-access-wtblg\") pod \"auto-csr-approver-29482602-zwgtg\" (UID: \"382c0b4c-851c-4cc6-aaa8-554ac4ebe397\") " pod="openshift-infra/auto-csr-approver-29482602-zwgtg"
Jan 21 00:42:00 crc kubenswrapper[5118]: I0121 00:42:00.365350 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wtblg\" (UniqueName: \"kubernetes.io/projected/382c0b4c-851c-4cc6-aaa8-554ac4ebe397-kube-api-access-wtblg\") pod \"auto-csr-approver-29482602-zwgtg\" (UID: \"382c0b4c-851c-4cc6-aaa8-554ac4ebe397\") " pod="openshift-infra/auto-csr-approver-29482602-zwgtg"
Jan 21 00:42:00 crc kubenswrapper[5118]: I0121 00:42:00.395189 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtblg\" (UniqueName: \"kubernetes.io/projected/382c0b4c-851c-4cc6-aaa8-554ac4ebe397-kube-api-access-wtblg\") pod \"auto-csr-approver-29482602-zwgtg\" (UID: 
\"382c0b4c-851c-4cc6-aaa8-554ac4ebe397\") " pod="openshift-infra/auto-csr-approver-29482602-zwgtg" Jan 21 00:42:00 crc kubenswrapper[5118]: I0121 00:42:00.537348 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482602-zwgtg" Jan 21 00:42:01 crc kubenswrapper[5118]: I0121 00:42:01.018010 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 00:42:01 crc kubenswrapper[5118]: I0121 00:42:01.022874 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482602-zwgtg"] Jan 21 00:42:01 crc kubenswrapper[5118]: I0121 00:42:01.303886 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482602-zwgtg" event={"ID":"382c0b4c-851c-4cc6-aaa8-554ac4ebe397","Type":"ContainerStarted","Data":"0996a5dcb363e2fa0dcd1b19c092d665ac22d9da785efe9da3ad98f22cf4269d"} Jan 21 00:42:02 crc kubenswrapper[5118]: I0121 00:42:02.312805 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482602-zwgtg" event={"ID":"382c0b4c-851c-4cc6-aaa8-554ac4ebe397","Type":"ContainerStarted","Data":"47f56e57b8541bfa0a392765ee78f1f5a0ece372aa3e8599a6ddb6d98a17d21b"} Jan 21 00:42:02 crc kubenswrapper[5118]: I0121 00:42:02.333133 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29482602-zwgtg" podStartSLOduration=1.354986926 podStartE2EDuration="2.33311083s" podCreationTimestamp="2026-01-21 00:42:00 +0000 UTC" firstStartedPulling="2026-01-21 00:42:01.01832323 +0000 UTC m=+1976.342570248" lastFinishedPulling="2026-01-21 00:42:01.996447134 +0000 UTC m=+1977.320694152" observedRunningTime="2026-01-21 00:42:02.326983267 +0000 UTC m=+1977.651230335" watchObservedRunningTime="2026-01-21 00:42:02.33311083 +0000 UTC m=+1977.657357858" Jan 21 00:42:03 crc kubenswrapper[5118]: I0121 00:42:03.322989 5118 generic.go:358] 
"Generic (PLEG): container finished" podID="382c0b4c-851c-4cc6-aaa8-554ac4ebe397" containerID="47f56e57b8541bfa0a392765ee78f1f5a0ece372aa3e8599a6ddb6d98a17d21b" exitCode=0 Jan 21 00:42:03 crc kubenswrapper[5118]: I0121 00:42:03.323097 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482602-zwgtg" event={"ID":"382c0b4c-851c-4cc6-aaa8-554ac4ebe397","Type":"ContainerDied","Data":"47f56e57b8541bfa0a392765ee78f1f5a0ece372aa3e8599a6ddb6d98a17d21b"} Jan 21 00:42:04 crc kubenswrapper[5118]: I0121 00:42:04.597223 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482602-zwgtg" Jan 21 00:42:04 crc kubenswrapper[5118]: I0121 00:42:04.746671 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtblg\" (UniqueName: \"kubernetes.io/projected/382c0b4c-851c-4cc6-aaa8-554ac4ebe397-kube-api-access-wtblg\") pod \"382c0b4c-851c-4cc6-aaa8-554ac4ebe397\" (UID: \"382c0b4c-851c-4cc6-aaa8-554ac4ebe397\") " Jan 21 00:42:04 crc kubenswrapper[5118]: I0121 00:42:04.755926 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/382c0b4c-851c-4cc6-aaa8-554ac4ebe397-kube-api-access-wtblg" (OuterVolumeSpecName: "kube-api-access-wtblg") pod "382c0b4c-851c-4cc6-aaa8-554ac4ebe397" (UID: "382c0b4c-851c-4cc6-aaa8-554ac4ebe397"). InnerVolumeSpecName "kube-api-access-wtblg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:42:04 crc kubenswrapper[5118]: I0121 00:42:04.848326 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wtblg\" (UniqueName: \"kubernetes.io/projected/382c0b4c-851c-4cc6-aaa8-554ac4ebe397-kube-api-access-wtblg\") on node \"crc\" DevicePath \"\"" Jan 21 00:42:05 crc kubenswrapper[5118]: I0121 00:42:05.345454 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482602-zwgtg" event={"ID":"382c0b4c-851c-4cc6-aaa8-554ac4ebe397","Type":"ContainerDied","Data":"0996a5dcb363e2fa0dcd1b19c092d665ac22d9da785efe9da3ad98f22cf4269d"} Jan 21 00:42:05 crc kubenswrapper[5118]: I0121 00:42:05.345822 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0996a5dcb363e2fa0dcd1b19c092d665ac22d9da785efe9da3ad98f22cf4269d" Jan 21 00:42:05 crc kubenswrapper[5118]: I0121 00:42:05.345956 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482602-zwgtg" Jan 21 00:42:05 crc kubenswrapper[5118]: I0121 00:42:05.395327 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482596-fjrts"] Jan 21 00:42:05 crc kubenswrapper[5118]: I0121 00:42:05.399725 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482596-fjrts"] Jan 21 00:42:06 crc kubenswrapper[5118]: I0121 00:42:06.990257 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6a9ced-1305-4655-82af-4a2beef6feb6" path="/var/lib/kubelet/pods/6f6a9ced-1305-4655-82af-4a2beef6feb6/volumes" Jan 21 00:42:07 crc kubenswrapper[5118]: I0121 00:42:07.727955 5118 scope.go:117] "RemoveContainer" containerID="977374be3ca424a36a1b1c7be9071090bfee64bfe2c0c55a7d280ed2020697c4" Jan 21 00:42:14 crc kubenswrapper[5118]: I0121 00:42:14.991537 5118 scope.go:117] "RemoveContainer" 
containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57" Jan 21 00:42:14 crc kubenswrapper[5118]: E0121 00:42:14.993439 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 00:42:29 crc kubenswrapper[5118]: I0121 00:42:29.983004 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57" Jan 21 00:42:29 crc kubenswrapper[5118]: E0121 00:42:29.988034 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 00:42:36 crc kubenswrapper[5118]: I0121 00:42:36.998497 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k82rb"] Jan 21 00:42:37 crc kubenswrapper[5118]: I0121 00:42:37.001412 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="382c0b4c-851c-4cc6-aaa8-554ac4ebe397" containerName="oc" Jan 21 00:42:37 crc kubenswrapper[5118]: I0121 00:42:37.001469 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="382c0b4c-851c-4cc6-aaa8-554ac4ebe397" containerName="oc" Jan 21 00:42:37 crc kubenswrapper[5118]: I0121 00:42:37.001804 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="382c0b4c-851c-4cc6-aaa8-554ac4ebe397" containerName="oc" Jan 21 
00:42:37 crc kubenswrapper[5118]: I0121 00:42:37.021373 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k82rb"] Jan 21 00:42:37 crc kubenswrapper[5118]: I0121 00:42:37.021516 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:37 crc kubenswrapper[5118]: I0121 00:42:37.121447 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqh7w\" (UniqueName: \"kubernetes.io/projected/ee96db74-fbf0-4291-8810-bb03bba7253e-kube-api-access-fqh7w\") pod \"redhat-operators-k82rb\" (UID: \"ee96db74-fbf0-4291-8810-bb03bba7253e\") " pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:37 crc kubenswrapper[5118]: I0121 00:42:37.121780 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee96db74-fbf0-4291-8810-bb03bba7253e-utilities\") pod \"redhat-operators-k82rb\" (UID: \"ee96db74-fbf0-4291-8810-bb03bba7253e\") " pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:37 crc kubenswrapper[5118]: I0121 00:42:37.122103 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee96db74-fbf0-4291-8810-bb03bba7253e-catalog-content\") pod \"redhat-operators-k82rb\" (UID: \"ee96db74-fbf0-4291-8810-bb03bba7253e\") " pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:37 crc kubenswrapper[5118]: I0121 00:42:37.223210 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee96db74-fbf0-4291-8810-bb03bba7253e-utilities\") pod \"redhat-operators-k82rb\" (UID: \"ee96db74-fbf0-4291-8810-bb03bba7253e\") " pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:37 crc 
kubenswrapper[5118]: I0121 00:42:37.223289 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee96db74-fbf0-4291-8810-bb03bba7253e-catalog-content\") pod \"redhat-operators-k82rb\" (UID: \"ee96db74-fbf0-4291-8810-bb03bba7253e\") " pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:37 crc kubenswrapper[5118]: I0121 00:42:37.223352 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fqh7w\" (UniqueName: \"kubernetes.io/projected/ee96db74-fbf0-4291-8810-bb03bba7253e-kube-api-access-fqh7w\") pod \"redhat-operators-k82rb\" (UID: \"ee96db74-fbf0-4291-8810-bb03bba7253e\") " pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:37 crc kubenswrapper[5118]: I0121 00:42:37.223715 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee96db74-fbf0-4291-8810-bb03bba7253e-catalog-content\") pod \"redhat-operators-k82rb\" (UID: \"ee96db74-fbf0-4291-8810-bb03bba7253e\") " pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:37 crc kubenswrapper[5118]: I0121 00:42:37.224022 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee96db74-fbf0-4291-8810-bb03bba7253e-utilities\") pod \"redhat-operators-k82rb\" (UID: \"ee96db74-fbf0-4291-8810-bb03bba7253e\") " pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:37 crc kubenswrapper[5118]: I0121 00:42:37.244036 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqh7w\" (UniqueName: \"kubernetes.io/projected/ee96db74-fbf0-4291-8810-bb03bba7253e-kube-api-access-fqh7w\") pod \"redhat-operators-k82rb\" (UID: \"ee96db74-fbf0-4291-8810-bb03bba7253e\") " pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:37 crc kubenswrapper[5118]: I0121 00:42:37.354624 5118 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:37 crc kubenswrapper[5118]: I0121 00:42:37.785850 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k82rb"] Jan 21 00:42:38 crc kubenswrapper[5118]: I0121 00:42:38.641297 5118 generic.go:358] "Generic (PLEG): container finished" podID="ee96db74-fbf0-4291-8810-bb03bba7253e" containerID="e65f2082b2422ed4a3ba270fc9dbbaff0a46295635ea983e2a4f490027ed70cd" exitCode=0 Jan 21 00:42:38 crc kubenswrapper[5118]: I0121 00:42:38.641354 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k82rb" event={"ID":"ee96db74-fbf0-4291-8810-bb03bba7253e","Type":"ContainerDied","Data":"e65f2082b2422ed4a3ba270fc9dbbaff0a46295635ea983e2a4f490027ed70cd"} Jan 21 00:42:38 crc kubenswrapper[5118]: I0121 00:42:38.641391 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k82rb" event={"ID":"ee96db74-fbf0-4291-8810-bb03bba7253e","Type":"ContainerStarted","Data":"dd037fb39ba94971716eaffbff8e8751c88cb01adaf8c761fc28540ea147b1e1"} Jan 21 00:42:39 crc kubenswrapper[5118]: I0121 00:42:39.648594 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k82rb" event={"ID":"ee96db74-fbf0-4291-8810-bb03bba7253e","Type":"ContainerStarted","Data":"b997b20e7231c829296989ac2153a6d11489749e417c8e08d22d7b7f0b2b99c9"} Jan 21 00:42:40 crc kubenswrapper[5118]: I0121 00:42:40.658842 5118 generic.go:358] "Generic (PLEG): container finished" podID="ee96db74-fbf0-4291-8810-bb03bba7253e" containerID="b997b20e7231c829296989ac2153a6d11489749e417c8e08d22d7b7f0b2b99c9" exitCode=0 Jan 21 00:42:40 crc kubenswrapper[5118]: I0121 00:42:40.658996 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k82rb" 
event={"ID":"ee96db74-fbf0-4291-8810-bb03bba7253e","Type":"ContainerDied","Data":"b997b20e7231c829296989ac2153a6d11489749e417c8e08d22d7b7f0b2b99c9"} Jan 21 00:42:41 crc kubenswrapper[5118]: I0121 00:42:41.672786 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k82rb" event={"ID":"ee96db74-fbf0-4291-8810-bb03bba7253e","Type":"ContainerStarted","Data":"11015eb21e643155d909dd9e476cd039b3e7fb704bc9d07648c0f54ecedd1027"} Jan 21 00:42:41 crc kubenswrapper[5118]: I0121 00:42:41.701697 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k82rb" podStartSLOduration=4.947448679 podStartE2EDuration="5.701666449s" podCreationTimestamp="2026-01-21 00:42:36 +0000 UTC" firstStartedPulling="2026-01-21 00:42:38.642109226 +0000 UTC m=+2013.966356244" lastFinishedPulling="2026-01-21 00:42:39.396326996 +0000 UTC m=+2014.720574014" observedRunningTime="2026-01-21 00:42:41.69155776 +0000 UTC m=+2017.015804798" watchObservedRunningTime="2026-01-21 00:42:41.701666449 +0000 UTC m=+2017.025913517" Jan 21 00:42:41 crc kubenswrapper[5118]: I0121 00:42:41.976263 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57" Jan 21 00:42:41 crc kubenswrapper[5118]: E0121 00:42:41.976884 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 00:42:47 crc kubenswrapper[5118]: I0121 00:42:47.356222 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:47 crc 
kubenswrapper[5118]: I0121 00:42:47.358250 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:48 crc kubenswrapper[5118]: I0121 00:42:48.425304 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k82rb" podUID="ee96db74-fbf0-4291-8810-bb03bba7253e" containerName="registry-server" probeResult="failure" output=< Jan 21 00:42:48 crc kubenswrapper[5118]: timeout: failed to connect service ":50051" within 1s Jan 21 00:42:48 crc kubenswrapper[5118]: > Jan 21 00:42:52 crc kubenswrapper[5118]: I0121 00:42:52.976854 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57" Jan 21 00:42:52 crc kubenswrapper[5118]: E0121 00:42:52.981090 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 00:42:57 crc kubenswrapper[5118]: I0121 00:42:57.419835 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:57 crc kubenswrapper[5118]: I0121 00:42:57.468235 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:57 crc kubenswrapper[5118]: I0121 00:42:57.660753 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k82rb"] Jan 21 00:42:58 crc kubenswrapper[5118]: I0121 00:42:58.830633 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k82rb" 
podUID="ee96db74-fbf0-4291-8810-bb03bba7253e" containerName="registry-server" containerID="cri-o://11015eb21e643155d909dd9e476cd039b3e7fb704bc9d07648c0f54ecedd1027" gracePeriod=2 Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.290806 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.323718 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqh7w\" (UniqueName: \"kubernetes.io/projected/ee96db74-fbf0-4291-8810-bb03bba7253e-kube-api-access-fqh7w\") pod \"ee96db74-fbf0-4291-8810-bb03bba7253e\" (UID: \"ee96db74-fbf0-4291-8810-bb03bba7253e\") " Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.323858 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee96db74-fbf0-4291-8810-bb03bba7253e-catalog-content\") pod \"ee96db74-fbf0-4291-8810-bb03bba7253e\" (UID: \"ee96db74-fbf0-4291-8810-bb03bba7253e\") " Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.323915 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee96db74-fbf0-4291-8810-bb03bba7253e-utilities\") pod \"ee96db74-fbf0-4291-8810-bb03bba7253e\" (UID: \"ee96db74-fbf0-4291-8810-bb03bba7253e\") " Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.327336 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee96db74-fbf0-4291-8810-bb03bba7253e-utilities" (OuterVolumeSpecName: "utilities") pod "ee96db74-fbf0-4291-8810-bb03bba7253e" (UID: "ee96db74-fbf0-4291-8810-bb03bba7253e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.338458 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee96db74-fbf0-4291-8810-bb03bba7253e-kube-api-access-fqh7w" (OuterVolumeSpecName: "kube-api-access-fqh7w") pod "ee96db74-fbf0-4291-8810-bb03bba7253e" (UID: "ee96db74-fbf0-4291-8810-bb03bba7253e"). InnerVolumeSpecName "kube-api-access-fqh7w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.427344 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fqh7w\" (UniqueName: \"kubernetes.io/projected/ee96db74-fbf0-4291-8810-bb03bba7253e-kube-api-access-fqh7w\") on node \"crc\" DevicePath \"\"" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.427404 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee96db74-fbf0-4291-8810-bb03bba7253e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.463296 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee96db74-fbf0-4291-8810-bb03bba7253e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee96db74-fbf0-4291-8810-bb03bba7253e" (UID: "ee96db74-fbf0-4291-8810-bb03bba7253e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.529267 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee96db74-fbf0-4291-8810-bb03bba7253e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.839841 5118 generic.go:358] "Generic (PLEG): container finished" podID="ee96db74-fbf0-4291-8810-bb03bba7253e" containerID="11015eb21e643155d909dd9e476cd039b3e7fb704bc9d07648c0f54ecedd1027" exitCode=0 Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.839985 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k82rb" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.840427 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k82rb" event={"ID":"ee96db74-fbf0-4291-8810-bb03bba7253e","Type":"ContainerDied","Data":"11015eb21e643155d909dd9e476cd039b3e7fb704bc9d07648c0f54ecedd1027"} Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.840474 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k82rb" event={"ID":"ee96db74-fbf0-4291-8810-bb03bba7253e","Type":"ContainerDied","Data":"dd037fb39ba94971716eaffbff8e8751c88cb01adaf8c761fc28540ea147b1e1"} Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.840501 5118 scope.go:117] "RemoveContainer" containerID="11015eb21e643155d909dd9e476cd039b3e7fb704bc9d07648c0f54ecedd1027" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.864352 5118 scope.go:117] "RemoveContainer" containerID="b997b20e7231c829296989ac2153a6d11489749e417c8e08d22d7b7f0b2b99c9" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.877544 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k82rb"] Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 
00:42:59.886080 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k82rb"] Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.905690 5118 scope.go:117] "RemoveContainer" containerID="e65f2082b2422ed4a3ba270fc9dbbaff0a46295635ea983e2a4f490027ed70cd" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.936480 5118 scope.go:117] "RemoveContainer" containerID="11015eb21e643155d909dd9e476cd039b3e7fb704bc9d07648c0f54ecedd1027" Jan 21 00:42:59 crc kubenswrapper[5118]: E0121 00:42:59.937505 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11015eb21e643155d909dd9e476cd039b3e7fb704bc9d07648c0f54ecedd1027\": container with ID starting with 11015eb21e643155d909dd9e476cd039b3e7fb704bc9d07648c0f54ecedd1027 not found: ID does not exist" containerID="11015eb21e643155d909dd9e476cd039b3e7fb704bc9d07648c0f54ecedd1027" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.937563 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11015eb21e643155d909dd9e476cd039b3e7fb704bc9d07648c0f54ecedd1027"} err="failed to get container status \"11015eb21e643155d909dd9e476cd039b3e7fb704bc9d07648c0f54ecedd1027\": rpc error: code = NotFound desc = could not find container \"11015eb21e643155d909dd9e476cd039b3e7fb704bc9d07648c0f54ecedd1027\": container with ID starting with 11015eb21e643155d909dd9e476cd039b3e7fb704bc9d07648c0f54ecedd1027 not found: ID does not exist" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.937595 5118 scope.go:117] "RemoveContainer" containerID="b997b20e7231c829296989ac2153a6d11489749e417c8e08d22d7b7f0b2b99c9" Jan 21 00:42:59 crc kubenswrapper[5118]: E0121 00:42:59.938101 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b997b20e7231c829296989ac2153a6d11489749e417c8e08d22d7b7f0b2b99c9\": container with ID 
starting with b997b20e7231c829296989ac2153a6d11489749e417c8e08d22d7b7f0b2b99c9 not found: ID does not exist" containerID="b997b20e7231c829296989ac2153a6d11489749e417c8e08d22d7b7f0b2b99c9" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.938146 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b997b20e7231c829296989ac2153a6d11489749e417c8e08d22d7b7f0b2b99c9"} err="failed to get container status \"b997b20e7231c829296989ac2153a6d11489749e417c8e08d22d7b7f0b2b99c9\": rpc error: code = NotFound desc = could not find container \"b997b20e7231c829296989ac2153a6d11489749e417c8e08d22d7b7f0b2b99c9\": container with ID starting with b997b20e7231c829296989ac2153a6d11489749e417c8e08d22d7b7f0b2b99c9 not found: ID does not exist" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.938188 5118 scope.go:117] "RemoveContainer" containerID="e65f2082b2422ed4a3ba270fc9dbbaff0a46295635ea983e2a4f490027ed70cd" Jan 21 00:42:59 crc kubenswrapper[5118]: E0121 00:42:59.941283 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e65f2082b2422ed4a3ba270fc9dbbaff0a46295635ea983e2a4f490027ed70cd\": container with ID starting with e65f2082b2422ed4a3ba270fc9dbbaff0a46295635ea983e2a4f490027ed70cd not found: ID does not exist" containerID="e65f2082b2422ed4a3ba270fc9dbbaff0a46295635ea983e2a4f490027ed70cd" Jan 21 00:42:59 crc kubenswrapper[5118]: I0121 00:42:59.941335 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e65f2082b2422ed4a3ba270fc9dbbaff0a46295635ea983e2a4f490027ed70cd"} err="failed to get container status \"e65f2082b2422ed4a3ba270fc9dbbaff0a46295635ea983e2a4f490027ed70cd\": rpc error: code = NotFound desc = could not find container \"e65f2082b2422ed4a3ba270fc9dbbaff0a46295635ea983e2a4f490027ed70cd\": container with ID starting with e65f2082b2422ed4a3ba270fc9dbbaff0a46295635ea983e2a4f490027ed70cd not found: 
ID does not exist" Jan 21 00:43:00 crc kubenswrapper[5118]: I0121 00:43:00.989393 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee96db74-fbf0-4291-8810-bb03bba7253e" path="/var/lib/kubelet/pods/ee96db74-fbf0-4291-8810-bb03bba7253e/volumes" Jan 21 00:43:04 crc kubenswrapper[5118]: I0121 00:43:04.984749 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57" Jan 21 00:43:04 crc kubenswrapper[5118]: E0121 00:43:04.986100 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 00:43:15 crc kubenswrapper[5118]: I0121 00:43:15.976504 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57" Jan 21 00:43:15 crc kubenswrapper[5118]: E0121 00:43:15.978359 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 00:43:26 crc kubenswrapper[5118]: I0121 00:43:26.975957 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57" Jan 21 00:43:26 crc kubenswrapper[5118]: E0121 00:43:26.977299 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 00:43:37 crc kubenswrapper[5118]: I0121 00:43:37.975833 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57" Jan 21 00:43:39 crc kubenswrapper[5118]: I0121 00:43:39.188110 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"ef4d29ff7544b6ec2a345aea13169974a9354ef86fcdc4898ed5dba5779dc9a6"} Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.140015 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482604-pml75"] Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.141304 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ee96db74-fbf0-4291-8810-bb03bba7253e" containerName="extract-utilities" Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.141317 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee96db74-fbf0-4291-8810-bb03bba7253e" containerName="extract-utilities" Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.141354 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ee96db74-fbf0-4291-8810-bb03bba7253e" containerName="registry-server" Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.141360 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee96db74-fbf0-4291-8810-bb03bba7253e" containerName="registry-server" Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.141372 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ee96db74-fbf0-4291-8810-bb03bba7253e" 
containerName="extract-content" Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.141378 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee96db74-fbf0-4291-8810-bb03bba7253e" containerName="extract-content" Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.141493 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="ee96db74-fbf0-4291-8810-bb03bba7253e" containerName="registry-server" Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.149560 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482604-pml75" Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.155370 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.155616 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.155633 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482604-pml75"] Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.157480 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.305807 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccgrk\" (UniqueName: \"kubernetes.io/projected/9e1d66bc-063d-4211-a5d4-2e3f90aca915-kube-api-access-ccgrk\") pod \"auto-csr-approver-29482604-pml75\" (UID: \"9e1d66bc-063d-4211-a5d4-2e3f90aca915\") " pod="openshift-infra/auto-csr-approver-29482604-pml75" Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.407369 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ccgrk\" (UniqueName: 
\"kubernetes.io/projected/9e1d66bc-063d-4211-a5d4-2e3f90aca915-kube-api-access-ccgrk\") pod \"auto-csr-approver-29482604-pml75\" (UID: \"9e1d66bc-063d-4211-a5d4-2e3f90aca915\") " pod="openshift-infra/auto-csr-approver-29482604-pml75" Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.445100 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccgrk\" (UniqueName: \"kubernetes.io/projected/9e1d66bc-063d-4211-a5d4-2e3f90aca915-kube-api-access-ccgrk\") pod \"auto-csr-approver-29482604-pml75\" (UID: \"9e1d66bc-063d-4211-a5d4-2e3f90aca915\") " pod="openshift-infra/auto-csr-approver-29482604-pml75" Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.484865 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482604-pml75" Jan 21 00:44:00 crc kubenswrapper[5118]: I0121 00:44:00.749098 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482604-pml75"] Jan 21 00:44:01 crc kubenswrapper[5118]: I0121 00:44:01.387374 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482604-pml75" event={"ID":"9e1d66bc-063d-4211-a5d4-2e3f90aca915","Type":"ContainerStarted","Data":"f153e15c029bc3197bf81dd32c0611d66d3b5466883c0f4e18e8db42f0f5707f"} Jan 21 00:44:02 crc kubenswrapper[5118]: I0121 00:44:02.401672 5118 generic.go:358] "Generic (PLEG): container finished" podID="9e1d66bc-063d-4211-a5d4-2e3f90aca915" containerID="522647b963882e5b2dfb3ebd12943b7b2a47b678e0ed1e0a08a1811a1eb67a5b" exitCode=0 Jan 21 00:44:02 crc kubenswrapper[5118]: I0121 00:44:02.401822 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482604-pml75" event={"ID":"9e1d66bc-063d-4211-a5d4-2e3f90aca915","Type":"ContainerDied","Data":"522647b963882e5b2dfb3ebd12943b7b2a47b678e0ed1e0a08a1811a1eb67a5b"} Jan 21 00:44:03 crc kubenswrapper[5118]: I0121 00:44:03.652714 5118 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482604-pml75" Jan 21 00:44:03 crc kubenswrapper[5118]: I0121 00:44:03.771954 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccgrk\" (UniqueName: \"kubernetes.io/projected/9e1d66bc-063d-4211-a5d4-2e3f90aca915-kube-api-access-ccgrk\") pod \"9e1d66bc-063d-4211-a5d4-2e3f90aca915\" (UID: \"9e1d66bc-063d-4211-a5d4-2e3f90aca915\") " Jan 21 00:44:03 crc kubenswrapper[5118]: I0121 00:44:03.779018 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e1d66bc-063d-4211-a5d4-2e3f90aca915-kube-api-access-ccgrk" (OuterVolumeSpecName: "kube-api-access-ccgrk") pod "9e1d66bc-063d-4211-a5d4-2e3f90aca915" (UID: "9e1d66bc-063d-4211-a5d4-2e3f90aca915"). InnerVolumeSpecName "kube-api-access-ccgrk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:44:03 crc kubenswrapper[5118]: I0121 00:44:03.873317 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ccgrk\" (UniqueName: \"kubernetes.io/projected/9e1d66bc-063d-4211-a5d4-2e3f90aca915-kube-api-access-ccgrk\") on node \"crc\" DevicePath \"\"" Jan 21 00:44:04 crc kubenswrapper[5118]: I0121 00:44:04.428020 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482604-pml75" Jan 21 00:44:04 crc kubenswrapper[5118]: I0121 00:44:04.428609 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482604-pml75" event={"ID":"9e1d66bc-063d-4211-a5d4-2e3f90aca915","Type":"ContainerDied","Data":"f153e15c029bc3197bf81dd32c0611d66d3b5466883c0f4e18e8db42f0f5707f"} Jan 21 00:44:04 crc kubenswrapper[5118]: I0121 00:44:04.431511 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f153e15c029bc3197bf81dd32c0611d66d3b5466883c0f4e18e8db42f0f5707f" Jan 21 00:44:04 crc kubenswrapper[5118]: I0121 00:44:04.718502 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482598-9mfxs"] Jan 21 00:44:04 crc kubenswrapper[5118]: I0121 00:44:04.724120 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482598-9mfxs"] Jan 21 00:44:04 crc kubenswrapper[5118]: I0121 00:44:04.985242 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e2e467-a6d5-4826-adf2-a7127fa6a71b" path="/var/lib/kubelet/pods/70e2e467-a6d5-4826-adf2-a7127fa6a71b/volumes" Jan 21 00:44:05 crc kubenswrapper[5118]: I0121 00:44:05.800095 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log" Jan 21 00:44:05 crc kubenswrapper[5118]: I0121 00:44:05.804799 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log" Jan 21 00:44:05 crc kubenswrapper[5118]: I0121 00:44:05.813266 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 00:44:05 crc kubenswrapper[5118]: I0121 00:44:05.816179 5118 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 00:44:07 crc kubenswrapper[5118]: I0121 00:44:07.875101 5118 scope.go:117] "RemoveContainer" containerID="41599b6bd7b487d29452472f276b2067292bc514d0aa373028e324e4165f0c8d" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.188655 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5"] Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.195003 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9e1d66bc-063d-4211-a5d4-2e3f90aca915" containerName="oc" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.195069 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e1d66bc-063d-4211-a5d4-2e3f90aca915" containerName="oc" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.196887 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="9e1d66bc-063d-4211-a5d4-2e3f90aca915" containerName="oc" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.202653 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5"] Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.203136 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.216010 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.220248 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.381957 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg7z4\" (UniqueName: \"kubernetes.io/projected/5a9443a6-6748-452d-ad86-86f9ce2406d4-kube-api-access-rg7z4\") pod \"collect-profiles-29482605-4l7s5\" (UID: \"5a9443a6-6748-452d-ad86-86f9ce2406d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.382005 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a9443a6-6748-452d-ad86-86f9ce2406d4-secret-volume\") pod \"collect-profiles-29482605-4l7s5\" (UID: \"5a9443a6-6748-452d-ad86-86f9ce2406d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.382037 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a9443a6-6748-452d-ad86-86f9ce2406d4-config-volume\") pod \"collect-profiles-29482605-4l7s5\" (UID: \"5a9443a6-6748-452d-ad86-86f9ce2406d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.483101 5118 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"kube-api-access-rg7z4\" (UniqueName: \"kubernetes.io/projected/5a9443a6-6748-452d-ad86-86f9ce2406d4-kube-api-access-rg7z4\") pod \"collect-profiles-29482605-4l7s5\" (UID: \"5a9443a6-6748-452d-ad86-86f9ce2406d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.483197 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a9443a6-6748-452d-ad86-86f9ce2406d4-secret-volume\") pod \"collect-profiles-29482605-4l7s5\" (UID: \"5a9443a6-6748-452d-ad86-86f9ce2406d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.483226 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a9443a6-6748-452d-ad86-86f9ce2406d4-config-volume\") pod \"collect-profiles-29482605-4l7s5\" (UID: \"5a9443a6-6748-452d-ad86-86f9ce2406d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.484187 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a9443a6-6748-452d-ad86-86f9ce2406d4-config-volume\") pod \"collect-profiles-29482605-4l7s5\" (UID: \"5a9443a6-6748-452d-ad86-86f9ce2406d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.490527 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a9443a6-6748-452d-ad86-86f9ce2406d4-secret-volume\") pod \"collect-profiles-29482605-4l7s5\" (UID: \"5a9443a6-6748-452d-ad86-86f9ce2406d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" Jan 21 00:45:00 crc 
kubenswrapper[5118]: I0121 00:45:00.502491 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg7z4\" (UniqueName: \"kubernetes.io/projected/5a9443a6-6748-452d-ad86-86f9ce2406d4-kube-api-access-rg7z4\") pod \"collect-profiles-29482605-4l7s5\" (UID: \"5a9443a6-6748-452d-ad86-86f9ce2406d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.532344 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.737317 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5"] Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.983351 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" event={"ID":"5a9443a6-6748-452d-ad86-86f9ce2406d4","Type":"ContainerStarted","Data":"11c3359e5688ddc06ed2a9014fcd73c3055df12446f5013a383197dd5146e6a8"} Jan 21 00:45:00 crc kubenswrapper[5118]: I0121 00:45:00.983402 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" event={"ID":"5a9443a6-6748-452d-ad86-86f9ce2406d4","Type":"ContainerStarted","Data":"9df5462b8e9ebfada221a067553f858c56b2893772d0e71aa17b79a7aee59920"} Jan 21 00:45:01 crc kubenswrapper[5118]: I0121 00:45:01.001406 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" podStartSLOduration=1.001388774 podStartE2EDuration="1.001388774s" podCreationTimestamp="2026-01-21 00:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:45:00.996438613 +0000 
UTC m=+2156.320685631" watchObservedRunningTime="2026-01-21 00:45:01.001388774 +0000 UTC m=+2156.325635792" Jan 21 00:45:02 crc kubenswrapper[5118]: I0121 00:45:02.000330 5118 generic.go:358] "Generic (PLEG): container finished" podID="5a9443a6-6748-452d-ad86-86f9ce2406d4" containerID="11c3359e5688ddc06ed2a9014fcd73c3055df12446f5013a383197dd5146e6a8" exitCode=0 Jan 21 00:45:02 crc kubenswrapper[5118]: I0121 00:45:02.000622 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" event={"ID":"5a9443a6-6748-452d-ad86-86f9ce2406d4","Type":"ContainerDied","Data":"11c3359e5688ddc06ed2a9014fcd73c3055df12446f5013a383197dd5146e6a8"} Jan 21 00:45:03 crc kubenswrapper[5118]: I0121 00:45:03.281375 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" Jan 21 00:45:03 crc kubenswrapper[5118]: I0121 00:45:03.320407 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a9443a6-6748-452d-ad86-86f9ce2406d4-secret-volume\") pod \"5a9443a6-6748-452d-ad86-86f9ce2406d4\" (UID: \"5a9443a6-6748-452d-ad86-86f9ce2406d4\") " Jan 21 00:45:03 crc kubenswrapper[5118]: I0121 00:45:03.320662 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg7z4\" (UniqueName: \"kubernetes.io/projected/5a9443a6-6748-452d-ad86-86f9ce2406d4-kube-api-access-rg7z4\") pod \"5a9443a6-6748-452d-ad86-86f9ce2406d4\" (UID: \"5a9443a6-6748-452d-ad86-86f9ce2406d4\") " Jan 21 00:45:03 crc kubenswrapper[5118]: I0121 00:45:03.320848 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a9443a6-6748-452d-ad86-86f9ce2406d4-config-volume\") pod \"5a9443a6-6748-452d-ad86-86f9ce2406d4\" (UID: \"5a9443a6-6748-452d-ad86-86f9ce2406d4\") " Jan 
21 00:45:03 crc kubenswrapper[5118]: I0121 00:45:03.321406 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a9443a6-6748-452d-ad86-86f9ce2406d4-config-volume" (OuterVolumeSpecName: "config-volume") pod "5a9443a6-6748-452d-ad86-86f9ce2406d4" (UID: "5a9443a6-6748-452d-ad86-86f9ce2406d4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:45:03 crc kubenswrapper[5118]: I0121 00:45:03.326641 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9443a6-6748-452d-ad86-86f9ce2406d4-kube-api-access-rg7z4" (OuterVolumeSpecName: "kube-api-access-rg7z4") pod "5a9443a6-6748-452d-ad86-86f9ce2406d4" (UID: "5a9443a6-6748-452d-ad86-86f9ce2406d4"). InnerVolumeSpecName "kube-api-access-rg7z4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:45:03 crc kubenswrapper[5118]: I0121 00:45:03.326755 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a9443a6-6748-452d-ad86-86f9ce2406d4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5a9443a6-6748-452d-ad86-86f9ce2406d4" (UID: "5a9443a6-6748-452d-ad86-86f9ce2406d4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:45:03 crc kubenswrapper[5118]: I0121 00:45:03.422968 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rg7z4\" (UniqueName: \"kubernetes.io/projected/5a9443a6-6748-452d-ad86-86f9ce2406d4-kube-api-access-rg7z4\") on node \"crc\" DevicePath \"\"" Jan 21 00:45:03 crc kubenswrapper[5118]: I0121 00:45:03.423021 5118 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a9443a6-6748-452d-ad86-86f9ce2406d4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 00:45:03 crc kubenswrapper[5118]: I0121 00:45:03.423031 5118 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a9443a6-6748-452d-ad86-86f9ce2406d4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 00:45:04 crc kubenswrapper[5118]: I0121 00:45:04.021649 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" event={"ID":"5a9443a6-6748-452d-ad86-86f9ce2406d4","Type":"ContainerDied","Data":"9df5462b8e9ebfada221a067553f858c56b2893772d0e71aa17b79a7aee59920"} Jan 21 00:45:04 crc kubenswrapper[5118]: I0121 00:45:04.021692 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9df5462b8e9ebfada221a067553f858c56b2893772d0e71aa17b79a7aee59920" Jan 21 00:45:04 crc kubenswrapper[5118]: I0121 00:45:04.021774 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482605-4l7s5" Jan 21 00:45:04 crc kubenswrapper[5118]: I0121 00:45:04.353277 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb"] Jan 21 00:45:04 crc kubenswrapper[5118]: I0121 00:45:04.361800 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482560-6bcnb"] Jan 21 00:45:04 crc kubenswrapper[5118]: I0121 00:45:04.995750 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c693d48-122b-44a7-8257-f4f312e980aa" path="/var/lib/kubelet/pods/7c693d48-122b-44a7-8257-f4f312e980aa/volumes" Jan 21 00:45:08 crc kubenswrapper[5118]: I0121 00:45:08.027033 5118 scope.go:117] "RemoveContainer" containerID="2baf03122af0e60a3556f963c6ab1e5d2f09f592ff116073005a79888cd27156" Jan 21 00:46:00 crc kubenswrapper[5118]: I0121 00:46:00.152136 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482606-wckr9"] Jan 21 00:46:00 crc kubenswrapper[5118]: I0121 00:46:00.154871 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a9443a6-6748-452d-ad86-86f9ce2406d4" containerName="collect-profiles" Jan 21 00:46:00 crc kubenswrapper[5118]: I0121 00:46:00.154960 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a9443a6-6748-452d-ad86-86f9ce2406d4" containerName="collect-profiles" Jan 21 00:46:00 crc kubenswrapper[5118]: I0121 00:46:00.155146 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="5a9443a6-6748-452d-ad86-86f9ce2406d4" containerName="collect-profiles" Jan 21 00:46:00 crc kubenswrapper[5118]: I0121 00:46:00.159037 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482606-wckr9" Jan 21 00:46:00 crc kubenswrapper[5118]: I0121 00:46:00.162212 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 00:46:00 crc kubenswrapper[5118]: I0121 00:46:00.162404 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 00:46:00 crc kubenswrapper[5118]: I0121 00:46:00.162605 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 00:46:00 crc kubenswrapper[5118]: I0121 00:46:00.210222 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482606-wckr9"] Jan 21 00:46:00 crc kubenswrapper[5118]: I0121 00:46:00.245552 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-885qh\" (UniqueName: \"kubernetes.io/projected/2cafbb17-ce21-411f-bab3-cc0bb0fdbf61-kube-api-access-885qh\") pod \"auto-csr-approver-29482606-wckr9\" (UID: \"2cafbb17-ce21-411f-bab3-cc0bb0fdbf61\") " pod="openshift-infra/auto-csr-approver-29482606-wckr9" Jan 21 00:46:00 crc kubenswrapper[5118]: I0121 00:46:00.347993 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-885qh\" (UniqueName: \"kubernetes.io/projected/2cafbb17-ce21-411f-bab3-cc0bb0fdbf61-kube-api-access-885qh\") pod \"auto-csr-approver-29482606-wckr9\" (UID: \"2cafbb17-ce21-411f-bab3-cc0bb0fdbf61\") " pod="openshift-infra/auto-csr-approver-29482606-wckr9" Jan 21 00:46:00 crc kubenswrapper[5118]: I0121 00:46:00.387045 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-885qh\" (UniqueName: \"kubernetes.io/projected/2cafbb17-ce21-411f-bab3-cc0bb0fdbf61-kube-api-access-885qh\") pod \"auto-csr-approver-29482606-wckr9\" (UID: 
\"2cafbb17-ce21-411f-bab3-cc0bb0fdbf61\") " pod="openshift-infra/auto-csr-approver-29482606-wckr9" Jan 21 00:46:00 crc kubenswrapper[5118]: I0121 00:46:00.516717 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482606-wckr9" Jan 21 00:46:00 crc kubenswrapper[5118]: I0121 00:46:00.818018 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482606-wckr9"] Jan 21 00:46:01 crc kubenswrapper[5118]: I0121 00:46:01.547086 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482606-wckr9" event={"ID":"2cafbb17-ce21-411f-bab3-cc0bb0fdbf61","Type":"ContainerStarted","Data":"bbb08b067d68f301c513b680d6ff52dd59f21bfa981bb09e6189a00f4ec8cdc5"} Jan 21 00:46:02 crc kubenswrapper[5118]: I0121 00:46:02.559628 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482606-wckr9" event={"ID":"2cafbb17-ce21-411f-bab3-cc0bb0fdbf61","Type":"ContainerStarted","Data":"8f8d4087e78c385391abda6f2e54424e7d7d653fc1a080480ba9e41cffdee4d0"} Jan 21 00:46:02 crc kubenswrapper[5118]: I0121 00:46:02.584177 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29482606-wckr9" podStartSLOduration=1.381203692 podStartE2EDuration="2.584137588s" podCreationTimestamp="2026-01-21 00:46:00 +0000 UTC" firstStartedPulling="2026-01-21 00:46:00.831566463 +0000 UTC m=+2216.155813481" lastFinishedPulling="2026-01-21 00:46:02.034500319 +0000 UTC m=+2217.358747377" observedRunningTime="2026-01-21 00:46:02.579387172 +0000 UTC m=+2217.903634230" watchObservedRunningTime="2026-01-21 00:46:02.584137588 +0000 UTC m=+2217.908384606" Jan 21 00:46:03 crc kubenswrapper[5118]: I0121 00:46:03.572590 5118 generic.go:358] "Generic (PLEG): container finished" podID="2cafbb17-ce21-411f-bab3-cc0bb0fdbf61" containerID="8f8d4087e78c385391abda6f2e54424e7d7d653fc1a080480ba9e41cffdee4d0" 
exitCode=0 Jan 21 00:46:03 crc kubenswrapper[5118]: I0121 00:46:03.572822 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482606-wckr9" event={"ID":"2cafbb17-ce21-411f-bab3-cc0bb0fdbf61","Type":"ContainerDied","Data":"8f8d4087e78c385391abda6f2e54424e7d7d653fc1a080480ba9e41cffdee4d0"} Jan 21 00:46:03 crc kubenswrapper[5118]: I0121 00:46:03.800913 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:46:03 crc kubenswrapper[5118]: I0121 00:46:03.801043 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:46:04 crc kubenswrapper[5118]: I0121 00:46:04.960344 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482606-wckr9" Jan 21 00:46:04 crc kubenswrapper[5118]: I0121 00:46:04.961966 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-885qh\" (UniqueName: \"kubernetes.io/projected/2cafbb17-ce21-411f-bab3-cc0bb0fdbf61-kube-api-access-885qh\") pod \"2cafbb17-ce21-411f-bab3-cc0bb0fdbf61\" (UID: \"2cafbb17-ce21-411f-bab3-cc0bb0fdbf61\") " Jan 21 00:46:05 crc kubenswrapper[5118]: I0121 00:46:05.017095 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cafbb17-ce21-411f-bab3-cc0bb0fdbf61-kube-api-access-885qh" (OuterVolumeSpecName: "kube-api-access-885qh") pod "2cafbb17-ce21-411f-bab3-cc0bb0fdbf61" (UID: "2cafbb17-ce21-411f-bab3-cc0bb0fdbf61"). InnerVolumeSpecName "kube-api-access-885qh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:46:05 crc kubenswrapper[5118]: I0121 00:46:05.063711 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-885qh\" (UniqueName: \"kubernetes.io/projected/2cafbb17-ce21-411f-bab3-cc0bb0fdbf61-kube-api-access-885qh\") on node \"crc\" DevicePath \"\"" Jan 21 00:46:05 crc kubenswrapper[5118]: I0121 00:46:05.600173 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482606-wckr9" Jan 21 00:46:05 crc kubenswrapper[5118]: I0121 00:46:05.600260 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482606-wckr9" event={"ID":"2cafbb17-ce21-411f-bab3-cc0bb0fdbf61","Type":"ContainerDied","Data":"bbb08b067d68f301c513b680d6ff52dd59f21bfa981bb09e6189a00f4ec8cdc5"} Jan 21 00:46:05 crc kubenswrapper[5118]: I0121 00:46:05.600312 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbb08b067d68f301c513b680d6ff52dd59f21bfa981bb09e6189a00f4ec8cdc5" Jan 21 00:46:05 crc kubenswrapper[5118]: I0121 00:46:05.647725 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482600-hcbz8"] Jan 21 00:46:05 crc kubenswrapper[5118]: I0121 00:46:05.652961 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482600-hcbz8"] Jan 21 00:46:06 crc kubenswrapper[5118]: I0121 00:46:06.990805 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd67781f-b2ee-450e-bdac-fe85f9014d24" path="/var/lib/kubelet/pods/bd67781f-b2ee-450e-bdac-fe85f9014d24/volumes" Jan 21 00:46:08 crc kubenswrapper[5118]: I0121 00:46:08.083104 5118 scope.go:117] "RemoveContainer" containerID="ee2d0cbc305365ef96f9d1209325e1afc9d45487a3f00cfe47acf9a547ad7ceb" Jan 21 00:46:33 crc kubenswrapper[5118]: I0121 00:46:33.801053 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:46:33 crc kubenswrapper[5118]: I0121 00:46:33.801677 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:47:03 crc kubenswrapper[5118]: I0121 00:47:03.803788 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 00:47:03 crc kubenswrapper[5118]: I0121 00:47:03.804778 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 00:47:03 crc kubenswrapper[5118]: I0121 00:47:03.804851 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 00:47:03 crc kubenswrapper[5118]: I0121 00:47:03.805799 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef4d29ff7544b6ec2a345aea13169974a9354ef86fcdc4898ed5dba5779dc9a6"} pod="openshift-machine-config-operator/machine-config-daemon-22r9n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 00:47:03 crc kubenswrapper[5118]: I0121 00:47:03.805899 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" containerID="cri-o://ef4d29ff7544b6ec2a345aea13169974a9354ef86fcdc4898ed5dba5779dc9a6" gracePeriod=600 Jan 21 00:47:03 crc kubenswrapper[5118]: I0121 
00:47:03.937306 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 00:47:04 crc kubenswrapper[5118]: I0121 00:47:04.213685 5118 generic.go:358] "Generic (PLEG): container finished" podID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerID="ef4d29ff7544b6ec2a345aea13169974a9354ef86fcdc4898ed5dba5779dc9a6" exitCode=0 Jan 21 00:47:04 crc kubenswrapper[5118]: I0121 00:47:04.213777 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerDied","Data":"ef4d29ff7544b6ec2a345aea13169974a9354ef86fcdc4898ed5dba5779dc9a6"} Jan 21 00:47:04 crc kubenswrapper[5118]: I0121 00:47:04.214538 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"} Jan 21 00:47:04 crc kubenswrapper[5118]: I0121 00:47:04.214584 5118 scope.go:117] "RemoveContainer" containerID="e10204400bac64eb3427c96cbd1d21093a8afa9ce104aae1cdbf584140828f57" Jan 21 00:48:00 crc kubenswrapper[5118]: I0121 00:48:00.149998 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482608-6rbcr"] Jan 21 00:48:00 crc kubenswrapper[5118]: I0121 00:48:00.152366 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2cafbb17-ce21-411f-bab3-cc0bb0fdbf61" containerName="oc" Jan 21 00:48:00 crc kubenswrapper[5118]: I0121 00:48:00.152394 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cafbb17-ce21-411f-bab3-cc0bb0fdbf61" containerName="oc" Jan 21 00:48:00 crc kubenswrapper[5118]: I0121 00:48:00.152615 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="2cafbb17-ce21-411f-bab3-cc0bb0fdbf61" containerName="oc" Jan 21 00:48:00 crc 
kubenswrapper[5118]: I0121 00:48:00.164181 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482608-6rbcr"] Jan 21 00:48:00 crc kubenswrapper[5118]: I0121 00:48:00.164313 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482608-6rbcr" Jan 21 00:48:00 crc kubenswrapper[5118]: I0121 00:48:00.168189 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 00:48:00 crc kubenswrapper[5118]: I0121 00:48:00.168217 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 00:48:00 crc kubenswrapper[5118]: I0121 00:48:00.168234 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 00:48:00 crc kubenswrapper[5118]: I0121 00:48:00.329513 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw8qr\" (UniqueName: \"kubernetes.io/projected/6831a76a-6c97-495d-8f1b-51173c10abbb-kube-api-access-tw8qr\") pod \"auto-csr-approver-29482608-6rbcr\" (UID: \"6831a76a-6c97-495d-8f1b-51173c10abbb\") " pod="openshift-infra/auto-csr-approver-29482608-6rbcr" Jan 21 00:48:00 crc kubenswrapper[5118]: I0121 00:48:00.431830 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tw8qr\" (UniqueName: \"kubernetes.io/projected/6831a76a-6c97-495d-8f1b-51173c10abbb-kube-api-access-tw8qr\") pod \"auto-csr-approver-29482608-6rbcr\" (UID: \"6831a76a-6c97-495d-8f1b-51173c10abbb\") " pod="openshift-infra/auto-csr-approver-29482608-6rbcr" Jan 21 00:48:00 crc kubenswrapper[5118]: I0121 00:48:00.474656 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw8qr\" (UniqueName: 
\"kubernetes.io/projected/6831a76a-6c97-495d-8f1b-51173c10abbb-kube-api-access-tw8qr\") pod \"auto-csr-approver-29482608-6rbcr\" (UID: \"6831a76a-6c97-495d-8f1b-51173c10abbb\") " pod="openshift-infra/auto-csr-approver-29482608-6rbcr" Jan 21 00:48:00 crc kubenswrapper[5118]: I0121 00:48:00.484154 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482608-6rbcr" Jan 21 00:48:00 crc kubenswrapper[5118]: I0121 00:48:00.958861 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482608-6rbcr"] Jan 21 00:48:01 crc kubenswrapper[5118]: I0121 00:48:01.808123 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482608-6rbcr" event={"ID":"6831a76a-6c97-495d-8f1b-51173c10abbb","Type":"ContainerStarted","Data":"8718538a9a0c039e6081ef7084b07edab3e91b7d0e186ce60b3b6422898b59b7"} Jan 21 00:48:02 crc kubenswrapper[5118]: I0121 00:48:02.820061 5118 generic.go:358] "Generic (PLEG): container finished" podID="6831a76a-6c97-495d-8f1b-51173c10abbb" containerID="40c738d9ced3497ee9e8838696a6b396f879c93064383f1745230e6801c7585c" exitCode=0 Jan 21 00:48:02 crc kubenswrapper[5118]: I0121 00:48:02.820211 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482608-6rbcr" event={"ID":"6831a76a-6c97-495d-8f1b-51173c10abbb","Type":"ContainerDied","Data":"40c738d9ced3497ee9e8838696a6b396f879c93064383f1745230e6801c7585c"} Jan 21 00:48:04 crc kubenswrapper[5118]: I0121 00:48:04.142176 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482608-6rbcr" Jan 21 00:48:04 crc kubenswrapper[5118]: I0121 00:48:04.190805 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw8qr\" (UniqueName: \"kubernetes.io/projected/6831a76a-6c97-495d-8f1b-51173c10abbb-kube-api-access-tw8qr\") pod \"6831a76a-6c97-495d-8f1b-51173c10abbb\" (UID: \"6831a76a-6c97-495d-8f1b-51173c10abbb\") " Jan 21 00:48:04 crc kubenswrapper[5118]: I0121 00:48:04.202138 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6831a76a-6c97-495d-8f1b-51173c10abbb-kube-api-access-tw8qr" (OuterVolumeSpecName: "kube-api-access-tw8qr") pod "6831a76a-6c97-495d-8f1b-51173c10abbb" (UID: "6831a76a-6c97-495d-8f1b-51173c10abbb"). InnerVolumeSpecName "kube-api-access-tw8qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:48:04 crc kubenswrapper[5118]: I0121 00:48:04.294823 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tw8qr\" (UniqueName: \"kubernetes.io/projected/6831a76a-6c97-495d-8f1b-51173c10abbb-kube-api-access-tw8qr\") on node \"crc\" DevicePath \"\"" Jan 21 00:48:04 crc kubenswrapper[5118]: I0121 00:48:04.837648 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482608-6rbcr" Jan 21 00:48:04 crc kubenswrapper[5118]: I0121 00:48:04.837637 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482608-6rbcr" event={"ID":"6831a76a-6c97-495d-8f1b-51173c10abbb","Type":"ContainerDied","Data":"8718538a9a0c039e6081ef7084b07edab3e91b7d0e186ce60b3b6422898b59b7"} Jan 21 00:48:04 crc kubenswrapper[5118]: I0121 00:48:04.837785 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8718538a9a0c039e6081ef7084b07edab3e91b7d0e186ce60b3b6422898b59b7" Jan 21 00:48:05 crc kubenswrapper[5118]: I0121 00:48:05.225222 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482602-zwgtg"] Jan 21 00:48:05 crc kubenswrapper[5118]: I0121 00:48:05.229627 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482602-zwgtg"] Jan 21 00:48:06 crc kubenswrapper[5118]: I0121 00:48:06.998569 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="382c0b4c-851c-4cc6-aaa8-554ac4ebe397" path="/var/lib/kubelet/pods/382c0b4c-851c-4cc6-aaa8-554ac4ebe397/volumes" Jan 21 00:48:08 crc kubenswrapper[5118]: I0121 00:48:08.278328 5118 scope.go:117] "RemoveContainer" containerID="47f56e57b8541bfa0a392765ee78f1f5a0ece372aa3e8599a6ddb6d98a17d21b" Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.249525 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9nxdh"] Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.251475 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6831a76a-6c97-495d-8f1b-51173c10abbb" containerName="oc" Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.251500 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="6831a76a-6c97-495d-8f1b-51173c10abbb" containerName="oc" Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 
00:48:38.251785 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="6831a76a-6c97-495d-8f1b-51173c10abbb" containerName="oc" Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.265948 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.273952 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nxdh"] Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.286638 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl8cw\" (UniqueName: \"kubernetes.io/projected/8f664598-3b47-4834-a947-b3c5d0db80b8-kube-api-access-xl8cw\") pod \"certified-operators-9nxdh\" (UID: \"8f664598-3b47-4834-a947-b3c5d0db80b8\") " pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.286875 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f664598-3b47-4834-a947-b3c5d0db80b8-utilities\") pod \"certified-operators-9nxdh\" (UID: \"8f664598-3b47-4834-a947-b3c5d0db80b8\") " pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.286998 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f664598-3b47-4834-a947-b3c5d0db80b8-catalog-content\") pod \"certified-operators-9nxdh\" (UID: \"8f664598-3b47-4834-a947-b3c5d0db80b8\") " pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.388378 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8f664598-3b47-4834-a947-b3c5d0db80b8-catalog-content\") pod \"certified-operators-9nxdh\" (UID: \"8f664598-3b47-4834-a947-b3c5d0db80b8\") " pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.388441 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xl8cw\" (UniqueName: \"kubernetes.io/projected/8f664598-3b47-4834-a947-b3c5d0db80b8-kube-api-access-xl8cw\") pod \"certified-operators-9nxdh\" (UID: \"8f664598-3b47-4834-a947-b3c5d0db80b8\") " pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.388505 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f664598-3b47-4834-a947-b3c5d0db80b8-utilities\") pod \"certified-operators-9nxdh\" (UID: \"8f664598-3b47-4834-a947-b3c5d0db80b8\") " pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.388946 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f664598-3b47-4834-a947-b3c5d0db80b8-utilities\") pod \"certified-operators-9nxdh\" (UID: \"8f664598-3b47-4834-a947-b3c5d0db80b8\") " pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.389113 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f664598-3b47-4834-a947-b3c5d0db80b8-catalog-content\") pod \"certified-operators-9nxdh\" (UID: \"8f664598-3b47-4834-a947-b3c5d0db80b8\") " pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.414417 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl8cw\" (UniqueName: 
\"kubernetes.io/projected/8f664598-3b47-4834-a947-b3c5d0db80b8-kube-api-access-xl8cw\") pod \"certified-operators-9nxdh\" (UID: \"8f664598-3b47-4834-a947-b3c5d0db80b8\") " pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.594487 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:38 crc kubenswrapper[5118]: I0121 00:48:38.797196 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nxdh"] Jan 21 00:48:39 crc kubenswrapper[5118]: I0121 00:48:39.240423 5118 generic.go:358] "Generic (PLEG): container finished" podID="8f664598-3b47-4834-a947-b3c5d0db80b8" containerID="0126efa29f4c15b5c8376cba6f1dd8459854a3d4bdd9bb54f2fc18b970a5c037" exitCode=0 Jan 21 00:48:39 crc kubenswrapper[5118]: I0121 00:48:39.240491 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nxdh" event={"ID":"8f664598-3b47-4834-a947-b3c5d0db80b8","Type":"ContainerDied","Data":"0126efa29f4c15b5c8376cba6f1dd8459854a3d4bdd9bb54f2fc18b970a5c037"} Jan 21 00:48:39 crc kubenswrapper[5118]: I0121 00:48:39.240996 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nxdh" event={"ID":"8f664598-3b47-4834-a947-b3c5d0db80b8","Type":"ContainerStarted","Data":"192c24399962ea6d1636726d211e05d326206bd340685767c739606a3cbfe071"} Jan 21 00:48:40 crc kubenswrapper[5118]: I0121 00:48:40.250251 5118 generic.go:358] "Generic (PLEG): container finished" podID="8f664598-3b47-4834-a947-b3c5d0db80b8" containerID="b2627aeabc78993ab17bd8ec353e59edc40d511deddc943b96f4d186e0fd791f" exitCode=0 Jan 21 00:48:40 crc kubenswrapper[5118]: I0121 00:48:40.250434 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nxdh" 
event={"ID":"8f664598-3b47-4834-a947-b3c5d0db80b8","Type":"ContainerDied","Data":"b2627aeabc78993ab17bd8ec353e59edc40d511deddc943b96f4d186e0fd791f"} Jan 21 00:48:41 crc kubenswrapper[5118]: I0121 00:48:41.260192 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nxdh" event={"ID":"8f664598-3b47-4834-a947-b3c5d0db80b8","Type":"ContainerStarted","Data":"4bd0e2490dfd3730c699839516836c9574965d50df09ea877599963dc0e69828"} Jan 21 00:48:41 crc kubenswrapper[5118]: I0121 00:48:41.280103 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9nxdh" podStartSLOduration=2.693232685 podStartE2EDuration="3.280081028s" podCreationTimestamp="2026-01-21 00:48:38 +0000 UTC" firstStartedPulling="2026-01-21 00:48:39.242415835 +0000 UTC m=+2374.566662883" lastFinishedPulling="2026-01-21 00:48:39.829264208 +0000 UTC m=+2375.153511226" observedRunningTime="2026-01-21 00:48:41.278689241 +0000 UTC m=+2376.602936269" watchObservedRunningTime="2026-01-21 00:48:41.280081028 +0000 UTC m=+2376.604328056" Jan 21 00:48:48 crc kubenswrapper[5118]: I0121 00:48:48.595217 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:48 crc kubenswrapper[5118]: I0121 00:48:48.597337 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:48 crc kubenswrapper[5118]: I0121 00:48:48.670581 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:49 crc kubenswrapper[5118]: I0121 00:48:49.388866 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:53 crc kubenswrapper[5118]: I0121 00:48:53.026330 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-9nxdh"] Jan 21 00:48:53 crc kubenswrapper[5118]: I0121 00:48:53.026989 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9nxdh" podUID="8f664598-3b47-4834-a947-b3c5d0db80b8" containerName="registry-server" containerID="cri-o://4bd0e2490dfd3730c699839516836c9574965d50df09ea877599963dc0e69828" gracePeriod=2 Jan 21 00:48:53 crc kubenswrapper[5118]: I0121 00:48:53.370362 5118 generic.go:358] "Generic (PLEG): container finished" podID="8f664598-3b47-4834-a947-b3c5d0db80b8" containerID="4bd0e2490dfd3730c699839516836c9574965d50df09ea877599963dc0e69828" exitCode=0 Jan 21 00:48:53 crc kubenswrapper[5118]: I0121 00:48:53.370874 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nxdh" event={"ID":"8f664598-3b47-4834-a947-b3c5d0db80b8","Type":"ContainerDied","Data":"4bd0e2490dfd3730c699839516836c9574965d50df09ea877599963dc0e69828"} Jan 21 00:48:53 crc kubenswrapper[5118]: I0121 00:48:53.451317 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:53 crc kubenswrapper[5118]: I0121 00:48:53.469399 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f664598-3b47-4834-a947-b3c5d0db80b8-utilities\") pod \"8f664598-3b47-4834-a947-b3c5d0db80b8\" (UID: \"8f664598-3b47-4834-a947-b3c5d0db80b8\") " Jan 21 00:48:53 crc kubenswrapper[5118]: I0121 00:48:53.469489 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f664598-3b47-4834-a947-b3c5d0db80b8-catalog-content\") pod \"8f664598-3b47-4834-a947-b3c5d0db80b8\" (UID: \"8f664598-3b47-4834-a947-b3c5d0db80b8\") " Jan 21 00:48:53 crc kubenswrapper[5118]: I0121 00:48:53.469980 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl8cw\" (UniqueName: \"kubernetes.io/projected/8f664598-3b47-4834-a947-b3c5d0db80b8-kube-api-access-xl8cw\") pod \"8f664598-3b47-4834-a947-b3c5d0db80b8\" (UID: \"8f664598-3b47-4834-a947-b3c5d0db80b8\") " Jan 21 00:48:53 crc kubenswrapper[5118]: I0121 00:48:53.478510 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f664598-3b47-4834-a947-b3c5d0db80b8-utilities" (OuterVolumeSpecName: "utilities") pod "8f664598-3b47-4834-a947-b3c5d0db80b8" (UID: "8f664598-3b47-4834-a947-b3c5d0db80b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:48:53 crc kubenswrapper[5118]: I0121 00:48:53.478651 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f664598-3b47-4834-a947-b3c5d0db80b8-kube-api-access-xl8cw" (OuterVolumeSpecName: "kube-api-access-xl8cw") pod "8f664598-3b47-4834-a947-b3c5d0db80b8" (UID: "8f664598-3b47-4834-a947-b3c5d0db80b8"). InnerVolumeSpecName "kube-api-access-xl8cw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:48:53 crc kubenswrapper[5118]: I0121 00:48:53.532414 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f664598-3b47-4834-a947-b3c5d0db80b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f664598-3b47-4834-a947-b3c5d0db80b8" (UID: "8f664598-3b47-4834-a947-b3c5d0db80b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:48:53 crc kubenswrapper[5118]: I0121 00:48:53.572369 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xl8cw\" (UniqueName: \"kubernetes.io/projected/8f664598-3b47-4834-a947-b3c5d0db80b8-kube-api-access-xl8cw\") on node \"crc\" DevicePath \"\"" Jan 21 00:48:53 crc kubenswrapper[5118]: I0121 00:48:53.572432 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f664598-3b47-4834-a947-b3c5d0db80b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:48:53 crc kubenswrapper[5118]: I0121 00:48:53.572462 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f664598-3b47-4834-a947-b3c5d0db80b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:48:54 crc kubenswrapper[5118]: I0121 00:48:54.386381 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nxdh" event={"ID":"8f664598-3b47-4834-a947-b3c5d0db80b8","Type":"ContainerDied","Data":"192c24399962ea6d1636726d211e05d326206bd340685767c739606a3cbfe071"} Jan 21 00:48:54 crc kubenswrapper[5118]: I0121 00:48:54.386458 5118 scope.go:117] "RemoveContainer" containerID="4bd0e2490dfd3730c699839516836c9574965d50df09ea877599963dc0e69828" Jan 21 00:48:54 crc kubenswrapper[5118]: I0121 00:48:54.386669 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9nxdh" Jan 21 00:48:54 crc kubenswrapper[5118]: I0121 00:48:54.426511 5118 scope.go:117] "RemoveContainer" containerID="b2627aeabc78993ab17bd8ec353e59edc40d511deddc943b96f4d186e0fd791f" Jan 21 00:48:54 crc kubenswrapper[5118]: I0121 00:48:54.454868 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nxdh"] Jan 21 00:48:54 crc kubenswrapper[5118]: I0121 00:48:54.465729 5118 scope.go:117] "RemoveContainer" containerID="0126efa29f4c15b5c8376cba6f1dd8459854a3d4bdd9bb54f2fc18b970a5c037" Jan 21 00:48:54 crc kubenswrapper[5118]: I0121 00:48:54.469525 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9nxdh"] Jan 21 00:48:54 crc kubenswrapper[5118]: I0121 00:48:54.993027 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f664598-3b47-4834-a947-b3c5d0db80b8" path="/var/lib/kubelet/pods/8f664598-3b47-4834-a947-b3c5d0db80b8/volumes" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.438837 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6dzp9"] Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.441012 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8f664598-3b47-4834-a947-b3c5d0db80b8" containerName="extract-content" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.441043 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f664598-3b47-4834-a947-b3c5d0db80b8" containerName="extract-content" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.441133 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8f664598-3b47-4834-a947-b3c5d0db80b8" containerName="registry-server" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.441147 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f664598-3b47-4834-a947-b3c5d0db80b8" 
containerName="registry-server" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.441191 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8f664598-3b47-4834-a947-b3c5d0db80b8" containerName="extract-utilities" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.441203 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f664598-3b47-4834-a947-b3c5d0db80b8" containerName="extract-utilities" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.441608 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="8f664598-3b47-4834-a947-b3c5d0db80b8" containerName="registry-server" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.447639 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dzp9" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.454135 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dzp9"] Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.533280 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q2k4\" (UniqueName: \"kubernetes.io/projected/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-kube-api-access-6q2k4\") pod \"community-operators-6dzp9\" (UID: \"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27\") " pod="openshift-marketplace/community-operators-6dzp9" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.533364 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-catalog-content\") pod \"community-operators-6dzp9\" (UID: \"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27\") " pod="openshift-marketplace/community-operators-6dzp9" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.533438 5118 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-utilities\") pod \"community-operators-6dzp9\" (UID: \"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27\") " pod="openshift-marketplace/community-operators-6dzp9" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.635387 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-catalog-content\") pod \"community-operators-6dzp9\" (UID: \"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27\") " pod="openshift-marketplace/community-operators-6dzp9" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.635494 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-utilities\") pod \"community-operators-6dzp9\" (UID: \"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27\") " pod="openshift-marketplace/community-operators-6dzp9" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.635586 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6q2k4\" (UniqueName: \"kubernetes.io/projected/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-kube-api-access-6q2k4\") pod \"community-operators-6dzp9\" (UID: \"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27\") " pod="openshift-marketplace/community-operators-6dzp9" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.636089 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-catalog-content\") pod \"community-operators-6dzp9\" (UID: \"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27\") " pod="openshift-marketplace/community-operators-6dzp9" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.636330 5118 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-utilities\") pod \"community-operators-6dzp9\" (UID: \"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27\") " pod="openshift-marketplace/community-operators-6dzp9" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.670397 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q2k4\" (UniqueName: \"kubernetes.io/projected/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-kube-api-access-6q2k4\") pod \"community-operators-6dzp9\" (UID: \"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27\") " pod="openshift-marketplace/community-operators-6dzp9" Jan 21 00:49:02 crc kubenswrapper[5118]: I0121 00:49:02.817063 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dzp9" Jan 21 00:49:03 crc kubenswrapper[5118]: I0121 00:49:03.323321 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dzp9"] Jan 21 00:49:03 crc kubenswrapper[5118]: I0121 00:49:03.469822 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dzp9" event={"ID":"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27","Type":"ContainerStarted","Data":"9f2ca757d0d39f61328ce04ca2e62f2649d381067d048c1489fb2ca21dec161f"} Jan 21 00:49:04 crc kubenswrapper[5118]: I0121 00:49:04.489636 5118 generic.go:358] "Generic (PLEG): container finished" podID="91f6e534-bc0c-4fb2-91f9-0844e1ca8d27" containerID="ad617f085969d54949766aa4c307e365fb3c19056a533ddf719b8f058e77205a" exitCode=0 Jan 21 00:49:04 crc kubenswrapper[5118]: I0121 00:49:04.489738 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dzp9" event={"ID":"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27","Type":"ContainerDied","Data":"ad617f085969d54949766aa4c307e365fb3c19056a533ddf719b8f058e77205a"} Jan 21 00:49:05 crc kubenswrapper[5118]: I0121 
00:49:05.499247 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dzp9" event={"ID":"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27","Type":"ContainerStarted","Data":"fe206dbbc3b482cb241bd27af4384b695a0563992c95079ce7b13af1ed3030bd"} Jan 21 00:49:05 crc kubenswrapper[5118]: I0121 00:49:05.896731 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log" Jan 21 00:49:05 crc kubenswrapper[5118]: I0121 00:49:05.901431 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log" Jan 21 00:49:05 crc kubenswrapper[5118]: I0121 00:49:05.906227 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 00:49:05 crc kubenswrapper[5118]: I0121 00:49:05.908615 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 00:49:06 crc kubenswrapper[5118]: I0121 00:49:06.514241 5118 generic.go:358] "Generic (PLEG): container finished" podID="91f6e534-bc0c-4fb2-91f9-0844e1ca8d27" containerID="fe206dbbc3b482cb241bd27af4384b695a0563992c95079ce7b13af1ed3030bd" exitCode=0 Jan 21 00:49:06 crc kubenswrapper[5118]: I0121 00:49:06.514463 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dzp9" event={"ID":"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27","Type":"ContainerDied","Data":"fe206dbbc3b482cb241bd27af4384b695a0563992c95079ce7b13af1ed3030bd"} Jan 21 00:49:07 crc kubenswrapper[5118]: I0121 00:49:07.530975 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dzp9" 
event={"ID":"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27","Type":"ContainerStarted","Data":"934b08f5a70184da0af9a4ff73c46a5a027e5ddb099cbf22f92c2e733c81de6f"}
Jan 21 00:49:07 crc kubenswrapper[5118]: I0121 00:49:07.571652 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6dzp9" podStartSLOduration=4.821384719 podStartE2EDuration="5.571623558s" podCreationTimestamp="2026-01-21 00:49:02 +0000 UTC" firstStartedPulling="2026-01-21 00:49:04.490984708 +0000 UTC m=+2399.815231756" lastFinishedPulling="2026-01-21 00:49:05.241223577 +0000 UTC m=+2400.565470595" observedRunningTime="2026-01-21 00:49:07.566693657 +0000 UTC m=+2402.890940765" watchObservedRunningTime="2026-01-21 00:49:07.571623558 +0000 UTC m=+2402.895870586"
Jan 21 00:49:12 crc kubenswrapper[5118]: I0121 00:49:12.817939 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-6dzp9"
Jan 21 00:49:12 crc kubenswrapper[5118]: I0121 00:49:12.818786 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6dzp9"
Jan 21 00:49:12 crc kubenswrapper[5118]: I0121 00:49:12.881432 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6dzp9"
Jan 21 00:49:13 crc kubenswrapper[5118]: I0121 00:49:13.642870 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6dzp9"
Jan 21 00:49:13 crc kubenswrapper[5118]: I0121 00:49:13.698506 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6dzp9"]
Jan 21 00:49:15 crc kubenswrapper[5118]: I0121 00:49:15.602919 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6dzp9" podUID="91f6e534-bc0c-4fb2-91f9-0844e1ca8d27" containerName="registry-server" containerID="cri-o://934b08f5a70184da0af9a4ff73c46a5a027e5ddb099cbf22f92c2e733c81de6f" gracePeriod=2
Jan 21 00:49:16 crc kubenswrapper[5118]: E0121 00:49:16.006497 5118 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91f6e534_bc0c_4fb2_91f9_0844e1ca8d27.slice/crio-conmon-934b08f5a70184da0af9a4ff73c46a5a027e5ddb099cbf22f92c2e733c81de6f.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 00:49:16 crc kubenswrapper[5118]: I0121 00:49:16.616416 5118 generic.go:358] "Generic (PLEG): container finished" podID="91f6e534-bc0c-4fb2-91f9-0844e1ca8d27" containerID="934b08f5a70184da0af9a4ff73c46a5a027e5ddb099cbf22f92c2e733c81de6f" exitCode=0
Jan 21 00:49:16 crc kubenswrapper[5118]: I0121 00:49:16.616493 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dzp9" event={"ID":"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27","Type":"ContainerDied","Data":"934b08f5a70184da0af9a4ff73c46a5a027e5ddb099cbf22f92c2e733c81de6f"}
Jan 21 00:49:16 crc kubenswrapper[5118]: I0121 00:49:16.780025 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dzp9"
Jan 21 00:49:16 crc kubenswrapper[5118]: I0121 00:49:16.918310 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-utilities\") pod \"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27\" (UID: \"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27\") "
Jan 21 00:49:16 crc kubenswrapper[5118]: I0121 00:49:16.918391 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q2k4\" (UniqueName: \"kubernetes.io/projected/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-kube-api-access-6q2k4\") pod \"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27\" (UID: \"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27\") "
Jan 21 00:49:16 crc kubenswrapper[5118]: I0121 00:49:16.918461 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-catalog-content\") pod \"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27\" (UID: \"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27\") "
Jan 21 00:49:16 crc kubenswrapper[5118]: I0121 00:49:16.922126 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-utilities" (OuterVolumeSpecName: "utilities") pod "91f6e534-bc0c-4fb2-91f9-0844e1ca8d27" (UID: "91f6e534-bc0c-4fb2-91f9-0844e1ca8d27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:49:16 crc kubenswrapper[5118]: I0121 00:49:16.924883 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-kube-api-access-6q2k4" (OuterVolumeSpecName: "kube-api-access-6q2k4") pod "91f6e534-bc0c-4fb2-91f9-0844e1ca8d27" (UID: "91f6e534-bc0c-4fb2-91f9-0844e1ca8d27"). InnerVolumeSpecName "kube-api-access-6q2k4".
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:49:16 crc kubenswrapper[5118]: I0121 00:49:16.972955 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91f6e534-bc0c-4fb2-91f9-0844e1ca8d27" (UID: "91f6e534-bc0c-4fb2-91f9-0844e1ca8d27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 00:49:17 crc kubenswrapper[5118]: I0121 00:49:17.019673 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 00:49:17 crc kubenswrapper[5118]: I0121 00:49:17.019704 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6q2k4\" (UniqueName: \"kubernetes.io/projected/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-kube-api-access-6q2k4\") on node \"crc\" DevicePath \"\""
Jan 21 00:49:17 crc kubenswrapper[5118]: I0121 00:49:17.019714 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 00:49:17 crc kubenswrapper[5118]: I0121 00:49:17.630969 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dzp9" event={"ID":"91f6e534-bc0c-4fb2-91f9-0844e1ca8d27","Type":"ContainerDied","Data":"9f2ca757d0d39f61328ce04ca2e62f2649d381067d048c1489fb2ca21dec161f"}
Jan 21 00:49:17 crc kubenswrapper[5118]: I0121 00:49:17.631029 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dzp9"
Jan 21 00:49:17 crc kubenswrapper[5118]: I0121 00:49:17.631056 5118 scope.go:117] "RemoveContainer" containerID="934b08f5a70184da0af9a4ff73c46a5a027e5ddb099cbf22f92c2e733c81de6f"
Jan 21 00:49:17 crc kubenswrapper[5118]: I0121 00:49:17.685139 5118 scope.go:117] "RemoveContainer" containerID="fe206dbbc3b482cb241bd27af4384b695a0563992c95079ce7b13af1ed3030bd"
Jan 21 00:49:17 crc kubenswrapper[5118]: I0121 00:49:17.697474 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6dzp9"]
Jan 21 00:49:17 crc kubenswrapper[5118]: I0121 00:49:17.703641 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6dzp9"]
Jan 21 00:49:17 crc kubenswrapper[5118]: I0121 00:49:17.711271 5118 scope.go:117] "RemoveContainer" containerID="ad617f085969d54949766aa4c307e365fb3c19056a533ddf719b8f058e77205a"
Jan 21 00:49:18 crc kubenswrapper[5118]: I0121 00:49:18.986452 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91f6e534-bc0c-4fb2-91f9-0844e1ca8d27" path="/var/lib/kubelet/pods/91f6e534-bc0c-4fb2-91f9-0844e1ca8d27/volumes"
Jan 21 00:49:33 crc kubenswrapper[5118]: I0121 00:49:33.800715 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:49:33 crc kubenswrapper[5118]: I0121 00:49:33.801099 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.163400 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482610-ltfcp"]
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.165111 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91f6e534-bc0c-4fb2-91f9-0844e1ca8d27" containerName="extract-content"
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.165133 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f6e534-bc0c-4fb2-91f9-0844e1ca8d27" containerName="extract-content"
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.165216 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91f6e534-bc0c-4fb2-91f9-0844e1ca8d27" containerName="extract-utilities"
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.165231 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f6e534-bc0c-4fb2-91f9-0844e1ca8d27" containerName="extract-utilities"
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.165250 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91f6e534-bc0c-4fb2-91f9-0844e1ca8d27" containerName="registry-server"
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.165261 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f6e534-bc0c-4fb2-91f9-0844e1ca8d27" containerName="registry-server"
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.165839 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="91f6e534-bc0c-4fb2-91f9-0844e1ca8d27" containerName="registry-server"
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.183964 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482610-ltfcp"]
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.184155 5118 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482610-ltfcp"
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.186929 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\""
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.186861 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.187566 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.271562 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrj6p\" (UniqueName: \"kubernetes.io/projected/098036bf-94d5-47e6-819b-bb3012cb75a4-kube-api-access-vrj6p\") pod \"auto-csr-approver-29482610-ltfcp\" (UID: \"098036bf-94d5-47e6-819b-bb3012cb75a4\") " pod="openshift-infra/auto-csr-approver-29482610-ltfcp"
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.373261 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vrj6p\" (UniqueName: \"kubernetes.io/projected/098036bf-94d5-47e6-819b-bb3012cb75a4-kube-api-access-vrj6p\") pod \"auto-csr-approver-29482610-ltfcp\" (UID: \"098036bf-94d5-47e6-819b-bb3012cb75a4\") " pod="openshift-infra/auto-csr-approver-29482610-ltfcp"
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.410276 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrj6p\" (UniqueName: \"kubernetes.io/projected/098036bf-94d5-47e6-819b-bb3012cb75a4-kube-api-access-vrj6p\") pod \"auto-csr-approver-29482610-ltfcp\" (UID: \"098036bf-94d5-47e6-819b-bb3012cb75a4\") " pod="openshift-infra/auto-csr-approver-29482610-ltfcp"
Jan 21 00:50:00 crc kubenswrapper[5118]: I0121 00:50:00.527299 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482610-ltfcp"
Jan 21 00:50:01 crc kubenswrapper[5118]: I0121 00:50:01.016889 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482610-ltfcp"]
Jan 21 00:50:01 crc kubenswrapper[5118]: I0121 00:50:01.042018 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482610-ltfcp" event={"ID":"098036bf-94d5-47e6-819b-bb3012cb75a4","Type":"ContainerStarted","Data":"5148ba7bf9aa537ca4e80a8b69053c022e4960284744484d4504284e619cf2d3"}
Jan 21 00:50:03 crc kubenswrapper[5118]: I0121 00:50:03.065307 5118 generic.go:358] "Generic (PLEG): container finished" podID="098036bf-94d5-47e6-819b-bb3012cb75a4" containerID="5686c7ae663dee93f51f2d2db1dc3fd307c2f161b36ae2f7953186adfa6bbeb7" exitCode=0
Jan 21 00:50:03 crc kubenswrapper[5118]: I0121 00:50:03.065389 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482610-ltfcp" event={"ID":"098036bf-94d5-47e6-819b-bb3012cb75a4","Type":"ContainerDied","Data":"5686c7ae663dee93f51f2d2db1dc3fd307c2f161b36ae2f7953186adfa6bbeb7"}
Jan 21 00:50:03 crc kubenswrapper[5118]: I0121 00:50:03.801089 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:50:03 crc kubenswrapper[5118]: I0121 00:50:03.801560 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:50:04 crc kubenswrapper[5118]: I0121 00:50:04.389128 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482610-ltfcp"
Jan 21 00:50:04 crc kubenswrapper[5118]: I0121 00:50:04.444745 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrj6p\" (UniqueName: \"kubernetes.io/projected/098036bf-94d5-47e6-819b-bb3012cb75a4-kube-api-access-vrj6p\") pod \"098036bf-94d5-47e6-819b-bb3012cb75a4\" (UID: \"098036bf-94d5-47e6-819b-bb3012cb75a4\") "
Jan 21 00:50:04 crc kubenswrapper[5118]: I0121 00:50:04.451444 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098036bf-94d5-47e6-819b-bb3012cb75a4-kube-api-access-vrj6p" (OuterVolumeSpecName: "kube-api-access-vrj6p") pod "098036bf-94d5-47e6-819b-bb3012cb75a4" (UID: "098036bf-94d5-47e6-819b-bb3012cb75a4"). InnerVolumeSpecName "kube-api-access-vrj6p". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:50:04 crc kubenswrapper[5118]: I0121 00:50:04.546946 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vrj6p\" (UniqueName: \"kubernetes.io/projected/098036bf-94d5-47e6-819b-bb3012cb75a4-kube-api-access-vrj6p\") on node \"crc\" DevicePath \"\""
Jan 21 00:50:05 crc kubenswrapper[5118]: I0121 00:50:05.081082 5118 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482610-ltfcp"
Jan 21 00:50:05 crc kubenswrapper[5118]: I0121 00:50:05.081082 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482610-ltfcp" event={"ID":"098036bf-94d5-47e6-819b-bb3012cb75a4","Type":"ContainerDied","Data":"5148ba7bf9aa537ca4e80a8b69053c022e4960284744484d4504284e619cf2d3"}
Jan 21 00:50:05 crc kubenswrapper[5118]: I0121 00:50:05.081610 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5148ba7bf9aa537ca4e80a8b69053c022e4960284744484d4504284e619cf2d3"
Jan 21 00:50:05 crc kubenswrapper[5118]: I0121 00:50:05.461654 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482604-pml75"]
Jan 21 00:50:05 crc kubenswrapper[5118]: I0121 00:50:05.467623 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482604-pml75"]
Jan 21 00:50:06 crc kubenswrapper[5118]: I0121 00:50:06.987408 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e1d66bc-063d-4211-a5d4-2e3f90aca915" path="/var/lib/kubelet/pods/9e1d66bc-063d-4211-a5d4-2e3f90aca915/volumes"
Jan 21 00:50:08 crc kubenswrapper[5118]: I0121 00:50:08.419563 5118 scope.go:117] "RemoveContainer" containerID="522647b963882e5b2dfb3ebd12943b7b2a47b678e0ed1e0a08a1811a1eb67a5b"
Jan 21 00:50:33 crc kubenswrapper[5118]: I0121 00:50:33.801499 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:50:33 crc kubenswrapper[5118]: I0121 00:50:33.802399 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:50:33 crc kubenswrapper[5118]: I0121 00:50:33.802469 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-22r9n"
Jan 21 00:50:33 crc kubenswrapper[5118]: I0121 00:50:33.803374 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"} pod="openshift-machine-config-operator/machine-config-daemon-22r9n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 00:50:33 crc kubenswrapper[5118]: I0121 00:50:33.803439 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" containerID="cri-o://1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f" gracePeriod=600
Jan 21 00:50:33 crc kubenswrapper[5118]: E0121 00:50:33.936675 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:50:34 crc kubenswrapper[5118]: I0121 00:50:34.370598 5118 generic.go:358] "Generic (PLEG): container finished" podID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f" exitCode=0
Jan 21 00:50:34 crc kubenswrapper[5118]: I0121 00:50:34.370672 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerDied","Data":"1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"}
Jan 21 00:50:34 crc kubenswrapper[5118]: I0121 00:50:34.370723 5118 scope.go:117] "RemoveContainer" containerID="ef4d29ff7544b6ec2a345aea13169974a9354ef86fcdc4898ed5dba5779dc9a6"
Jan 21 00:50:34 crc kubenswrapper[5118]: I0121 00:50:34.385605 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:50:34 crc kubenswrapper[5118]: E0121 00:50:34.386011 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:50:47 crc kubenswrapper[5118]: I0121 00:50:47.975885 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:50:47 crc kubenswrapper[5118]: E0121 00:50:47.976820 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:50:58 crc kubenswrapper[5118]: I0121 00:50:58.999282 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21
00:50:59 crc kubenswrapper[5118]: E0121 00:50:59.000242 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:51:13 crc kubenswrapper[5118]: I0121 00:51:13.975964 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:51:13 crc kubenswrapper[5118]: E0121 00:51:13.976776 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:51:28 crc kubenswrapper[5118]: I0121 00:51:28.990256 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:51:28 crc kubenswrapper[5118]: E0121 00:51:28.991290 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:51:40 crc kubenswrapper[5118]: I0121 00:51:40.975669 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:51:40 crc kubenswrapper[5118]: E0121 00:51:40.976701 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:51:55 crc kubenswrapper[5118]: I0121 00:51:55.975814 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:51:55 crc kubenswrapper[5118]: E0121 00:51:55.977524 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:52:00 crc kubenswrapper[5118]: I0121 00:52:00.140547 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482612-srpvx"]
Jan 21 00:52:00 crc kubenswrapper[5118]: I0121 00:52:00.141713 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="098036bf-94d5-47e6-819b-bb3012cb75a4" containerName="oc"
Jan 21 00:52:00 crc kubenswrapper[5118]: I0121 00:52:00.141733 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="098036bf-94d5-47e6-819b-bb3012cb75a4" containerName="oc"
Jan 21 00:52:00 crc kubenswrapper[5118]: I0121 00:52:00.141909 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="098036bf-94d5-47e6-819b-bb3012cb75a4" containerName="oc"
Jan 21 00:52:00 crc kubenswrapper[5118]: I0121 00:52:00.151734 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482612-srpvx"]
Jan 21 00:52:00 crc kubenswrapper[5118]: I0121 00:52:00.151852 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482612-srpvx"
Jan 21 00:52:00 crc kubenswrapper[5118]: I0121 00:52:00.154288 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 00:52:00 crc kubenswrapper[5118]: I0121 00:52:00.154288 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\""
Jan 21 00:52:00 crc kubenswrapper[5118]: I0121 00:52:00.154805 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 00:52:00 crc kubenswrapper[5118]: I0121 00:52:00.217712 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t56h7\" (UniqueName: \"kubernetes.io/projected/83c9d4ec-b4fa-4572-a111-bdc4e5afab4f-kube-api-access-t56h7\") pod \"auto-csr-approver-29482612-srpvx\" (UID: \"83c9d4ec-b4fa-4572-a111-bdc4e5afab4f\") " pod="openshift-infra/auto-csr-approver-29482612-srpvx"
Jan 21 00:52:00 crc kubenswrapper[5118]: I0121 00:52:00.320000 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t56h7\" (UniqueName: \"kubernetes.io/projected/83c9d4ec-b4fa-4572-a111-bdc4e5afab4f-kube-api-access-t56h7\") pod \"auto-csr-approver-29482612-srpvx\" (UID: \"83c9d4ec-b4fa-4572-a111-bdc4e5afab4f\") " pod="openshift-infra/auto-csr-approver-29482612-srpvx"
Jan 21 00:52:00 crc kubenswrapper[5118]: I0121 00:52:00.348374 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t56h7\" (UniqueName:
\"kubernetes.io/projected/83c9d4ec-b4fa-4572-a111-bdc4e5afab4f-kube-api-access-t56h7\") pod \"auto-csr-approver-29482612-srpvx\" (UID: \"83c9d4ec-b4fa-4572-a111-bdc4e5afab4f\") " pod="openshift-infra/auto-csr-approver-29482612-srpvx"
Jan 21 00:52:00 crc kubenswrapper[5118]: I0121 00:52:00.488243 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482612-srpvx"
Jan 21 00:52:00 crc kubenswrapper[5118]: I0121 00:52:00.956155 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482612-srpvx"]
Jan 21 00:52:00 crc kubenswrapper[5118]: W0121 00:52:00.965096 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83c9d4ec_b4fa_4572_a111_bdc4e5afab4f.slice/crio-ea696683f5f11527c30b984733fb061ca9632578718f9ae11de5350025952ed6 WatchSource:0}: Error finding container ea696683f5f11527c30b984733fb061ca9632578718f9ae11de5350025952ed6: Status 404 returned error can't find the container with id ea696683f5f11527c30b984733fb061ca9632578718f9ae11de5350025952ed6
Jan 21 00:52:01 crc kubenswrapper[5118]: I0121 00:52:01.185792 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482612-srpvx" event={"ID":"83c9d4ec-b4fa-4572-a111-bdc4e5afab4f","Type":"ContainerStarted","Data":"ea696683f5f11527c30b984733fb061ca9632578718f9ae11de5350025952ed6"}
Jan 21 00:52:03 crc kubenswrapper[5118]: I0121 00:52:03.206624 5118 generic.go:358] "Generic (PLEG): container finished" podID="83c9d4ec-b4fa-4572-a111-bdc4e5afab4f" containerID="782c36dae3e00e012e60f0c30eb0a514b9bdfbaa83b1ec242a7e8a53034f6899" exitCode=0
Jan 21 00:52:03 crc kubenswrapper[5118]: I0121 00:52:03.206677 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482612-srpvx" event={"ID":"83c9d4ec-b4fa-4572-a111-bdc4e5afab4f","Type":"ContainerDied","Data":"782c36dae3e00e012e60f0c30eb0a514b9bdfbaa83b1ec242a7e8a53034f6899"}
Jan 21 00:52:04 crc kubenswrapper[5118]: I0121 00:52:04.567306 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482612-srpvx"
Jan 21 00:52:04 crc kubenswrapper[5118]: I0121 00:52:04.729395 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t56h7\" (UniqueName: \"kubernetes.io/projected/83c9d4ec-b4fa-4572-a111-bdc4e5afab4f-kube-api-access-t56h7\") pod \"83c9d4ec-b4fa-4572-a111-bdc4e5afab4f\" (UID: \"83c9d4ec-b4fa-4572-a111-bdc4e5afab4f\") "
Jan 21 00:52:04 crc kubenswrapper[5118]: I0121 00:52:04.740301 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c9d4ec-b4fa-4572-a111-bdc4e5afab4f-kube-api-access-t56h7" (OuterVolumeSpecName: "kube-api-access-t56h7") pod "83c9d4ec-b4fa-4572-a111-bdc4e5afab4f" (UID: "83c9d4ec-b4fa-4572-a111-bdc4e5afab4f"). InnerVolumeSpecName "kube-api-access-t56h7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:52:04 crc kubenswrapper[5118]: I0121 00:52:04.831809 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t56h7\" (UniqueName: \"kubernetes.io/projected/83c9d4ec-b4fa-4572-a111-bdc4e5afab4f-kube-api-access-t56h7\") on node \"crc\" DevicePath \"\""
Jan 21 00:52:05 crc kubenswrapper[5118]: I0121 00:52:05.228920 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482612-srpvx"
Jan 21 00:52:05 crc kubenswrapper[5118]: I0121 00:52:05.228960 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482612-srpvx" event={"ID":"83c9d4ec-b4fa-4572-a111-bdc4e5afab4f","Type":"ContainerDied","Data":"ea696683f5f11527c30b984733fb061ca9632578718f9ae11de5350025952ed6"}
Jan 21 00:52:05 crc kubenswrapper[5118]: I0121 00:52:05.229005 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea696683f5f11527c30b984733fb061ca9632578718f9ae11de5350025952ed6"
Jan 21 00:52:05 crc kubenswrapper[5118]: I0121 00:52:05.666882 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482606-wckr9"]
Jan 21 00:52:05 crc kubenswrapper[5118]: I0121 00:52:05.702804 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482606-wckr9"]
Jan 21 00:52:06 crc kubenswrapper[5118]: I0121 00:52:06.987047 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cafbb17-ce21-411f-bab3-cc0bb0fdbf61" path="/var/lib/kubelet/pods/2cafbb17-ce21-411f-bab3-cc0bb0fdbf61/volumes"
Jan 21 00:52:08 crc kubenswrapper[5118]: I0121 00:52:08.569586 5118 scope.go:117] "RemoveContainer" containerID="8f8d4087e78c385391abda6f2e54424e7d7d653fc1a080480ba9e41cffdee4d0"
Jan 21 00:52:09 crc kubenswrapper[5118]: I0121 00:52:09.975324 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:52:09 crc kubenswrapper[5118]: E0121 00:52:09.976064 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\""
pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:52:23 crc kubenswrapper[5118]: I0121 00:52:23.975997 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:52:23 crc kubenswrapper[5118]: E0121 00:52:23.978024 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:52:34 crc kubenswrapper[5118]: I0121 00:52:34.988378 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:52:34 crc kubenswrapper[5118]: E0121 00:52:34.989647 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:52:45 crc kubenswrapper[5118]: I0121 00:52:45.977121 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:52:45 crc kubenswrapper[5118]: E0121 00:52:45.978045 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:52:58 crc kubenswrapper[5118]: I0121 00:52:58.978547 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:52:58 crc kubenswrapper[5118]: E0121 00:52:58.979811 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:53:12 crc kubenswrapper[5118]: I0121 00:53:12.975933 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:53:12 crc kubenswrapper[5118]: E0121 00:53:12.977436 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:53:25 crc kubenswrapper[5118]: I0121 00:53:25.976581 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:53:25 crc kubenswrapper[5118]: E0121 00:53:25.977591 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:53:37 crc kubenswrapper[5118]: I0121 00:53:37.975280 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:53:37 crc kubenswrapper[5118]: E0121 00:53:37.976043 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:53:41 crc kubenswrapper[5118]: I0121 00:53:41.127556 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9lm5z"]
Jan 21 00:53:41 crc kubenswrapper[5118]: I0121 00:53:41.129860 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83c9d4ec-b4fa-4572-a111-bdc4e5afab4f" containerName="oc"
Jan 21 00:53:41 crc kubenswrapper[5118]: I0121 00:53:41.129900 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c9d4ec-b4fa-4572-a111-bdc4e5afab4f" containerName="oc"
Jan 21 00:53:41 crc kubenswrapper[5118]: I0121 00:53:41.130261 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="83c9d4ec-b4fa-4572-a111-bdc4e5afab4f" containerName="oc"
Jan 21 00:53:41 crc kubenswrapper[5118]: I0121 00:53:41.739220 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9lm5z"]
Jan 21 00:53:41 crc kubenswrapper[5118]: I0121 00:53:41.739525 5118 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:53:41 crc kubenswrapper[5118]: I0121 00:53:41.844741 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-catalog-content\") pod \"redhat-operators-9lm5z\" (UID: \"3df8946b-3e8d-49e1-b746-57bc4b2dfd25\") " pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:53:41 crc kubenswrapper[5118]: I0121 00:53:41.844828 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxbgf\" (UniqueName: \"kubernetes.io/projected/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-kube-api-access-jxbgf\") pod \"redhat-operators-9lm5z\" (UID: \"3df8946b-3e8d-49e1-b746-57bc4b2dfd25\") " pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:53:41 crc kubenswrapper[5118]: I0121 00:53:41.845256 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-utilities\") pod \"redhat-operators-9lm5z\" (UID: \"3df8946b-3e8d-49e1-b746-57bc4b2dfd25\") " pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:53:41 crc kubenswrapper[5118]: I0121 00:53:41.946639 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-catalog-content\") pod \"redhat-operators-9lm5z\" (UID: \"3df8946b-3e8d-49e1-b746-57bc4b2dfd25\") " pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:53:41 crc kubenswrapper[5118]: I0121 00:53:41.946705 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jxbgf\" (UniqueName: \"kubernetes.io/projected/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-kube-api-access-jxbgf\") pod \"redhat-operators-9lm5z\" 
(UID: \"3df8946b-3e8d-49e1-b746-57bc4b2dfd25\") " pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:53:41 crc kubenswrapper[5118]: I0121 00:53:41.946872 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-utilities\") pod \"redhat-operators-9lm5z\" (UID: \"3df8946b-3e8d-49e1-b746-57bc4b2dfd25\") " pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:53:41 crc kubenswrapper[5118]: I0121 00:53:41.947823 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-utilities\") pod \"redhat-operators-9lm5z\" (UID: \"3df8946b-3e8d-49e1-b746-57bc4b2dfd25\") " pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:53:41 crc kubenswrapper[5118]: I0121 00:53:41.947818 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-catalog-content\") pod \"redhat-operators-9lm5z\" (UID: \"3df8946b-3e8d-49e1-b746-57bc4b2dfd25\") " pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:53:41 crc kubenswrapper[5118]: I0121 00:53:41.978809 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxbgf\" (UniqueName: \"kubernetes.io/projected/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-kube-api-access-jxbgf\") pod \"redhat-operators-9lm5z\" (UID: \"3df8946b-3e8d-49e1-b746-57bc4b2dfd25\") " pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:53:42 crc kubenswrapper[5118]: I0121 00:53:42.061742 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:53:42 crc kubenswrapper[5118]: I0121 00:53:42.516440 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9lm5z"] Jan 21 00:53:42 crc kubenswrapper[5118]: I0121 00:53:42.525661 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 00:53:43 crc kubenswrapper[5118]: I0121 00:53:43.263515 5118 generic.go:358] "Generic (PLEG): container finished" podID="3df8946b-3e8d-49e1-b746-57bc4b2dfd25" containerID="58cea7e66237e8ef4495f8cb7abc581b63342b89b706ff0d634681e440f446ef" exitCode=0 Jan 21 00:53:43 crc kubenswrapper[5118]: I0121 00:53:43.263659 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lm5z" event={"ID":"3df8946b-3e8d-49e1-b746-57bc4b2dfd25","Type":"ContainerDied","Data":"58cea7e66237e8ef4495f8cb7abc581b63342b89b706ff0d634681e440f446ef"} Jan 21 00:53:43 crc kubenswrapper[5118]: I0121 00:53:43.263688 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lm5z" event={"ID":"3df8946b-3e8d-49e1-b746-57bc4b2dfd25","Type":"ContainerStarted","Data":"105b102d313284ace6ad72651f0f27840b3605fda6a3b289d347b886e9f9ac6b"} Jan 21 00:53:44 crc kubenswrapper[5118]: I0121 00:53:44.276559 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lm5z" event={"ID":"3df8946b-3e8d-49e1-b746-57bc4b2dfd25","Type":"ContainerStarted","Data":"ca9324410f95783f394b1ab611e3ed00e7ce711097358e0a9a8fee4f3df2ad33"} Jan 21 00:53:45 crc kubenswrapper[5118]: I0121 00:53:45.285757 5118 generic.go:358] "Generic (PLEG): container finished" podID="3df8946b-3e8d-49e1-b746-57bc4b2dfd25" containerID="ca9324410f95783f394b1ab611e3ed00e7ce711097358e0a9a8fee4f3df2ad33" exitCode=0 Jan 21 00:53:45 crc kubenswrapper[5118]: I0121 00:53:45.286046 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9lm5z" event={"ID":"3df8946b-3e8d-49e1-b746-57bc4b2dfd25","Type":"ContainerDied","Data":"ca9324410f95783f394b1ab611e3ed00e7ce711097358e0a9a8fee4f3df2ad33"} Jan 21 00:53:46 crc kubenswrapper[5118]: I0121 00:53:46.295853 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lm5z" event={"ID":"3df8946b-3e8d-49e1-b746-57bc4b2dfd25","Type":"ContainerStarted","Data":"68b604f20eb6a529123b63581247a2c201637708cea9cb1a81a3c632f6bd7098"} Jan 21 00:53:46 crc kubenswrapper[5118]: I0121 00:53:46.325576 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9lm5z" podStartSLOduration=4.520226496 podStartE2EDuration="5.325552127s" podCreationTimestamp="2026-01-21 00:53:41 +0000 UTC" firstStartedPulling="2026-01-21 00:53:43.264425875 +0000 UTC m=+2678.588672893" lastFinishedPulling="2026-01-21 00:53:44.069751486 +0000 UTC m=+2679.393998524" observedRunningTime="2026-01-21 00:53:46.321291814 +0000 UTC m=+2681.645538842" watchObservedRunningTime="2026-01-21 00:53:46.325552127 +0000 UTC m=+2681.649799165" Jan 21 00:53:50 crc kubenswrapper[5118]: I0121 00:53:50.976207 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f" Jan 21 00:53:50 crc kubenswrapper[5118]: E0121 00:53:50.976939 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 00:53:52 crc kubenswrapper[5118]: I0121 00:53:52.062663 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:53:52 crc kubenswrapper[5118]: I0121 00:53:52.062769 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:53:53 crc kubenswrapper[5118]: I0121 00:53:53.129972 5118 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9lm5z" podUID="3df8946b-3e8d-49e1-b746-57bc4b2dfd25" containerName="registry-server" probeResult="failure" output=< Jan 21 00:53:53 crc kubenswrapper[5118]: timeout: failed to connect service ":50051" within 1s Jan 21 00:53:53 crc kubenswrapper[5118]: > Jan 21 00:54:00 crc kubenswrapper[5118]: I0121 00:54:00.140596 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482614-t5wjm"] Jan 21 00:54:01 crc kubenswrapper[5118]: I0121 00:54:01.214374 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482614-t5wjm" Jan 21 00:54:01 crc kubenswrapper[5118]: I0121 00:54:01.220534 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 00:54:01 crc kubenswrapper[5118]: I0121 00:54:01.220789 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 00:54:01 crc kubenswrapper[5118]: I0121 00:54:01.221040 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 00:54:01 crc kubenswrapper[5118]: I0121 00:54:01.231397 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482614-t5wjm"] Jan 21 00:54:01 crc kubenswrapper[5118]: I0121 00:54:01.301262 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxqg8\" (UniqueName: 
\"kubernetes.io/projected/46899737-0ca5-423e-bf0c-8f057c937e33-kube-api-access-bxqg8\") pod \"auto-csr-approver-29482614-t5wjm\" (UID: \"46899737-0ca5-423e-bf0c-8f057c937e33\") " pod="openshift-infra/auto-csr-approver-29482614-t5wjm" Jan 21 00:54:01 crc kubenswrapper[5118]: I0121 00:54:01.402559 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bxqg8\" (UniqueName: \"kubernetes.io/projected/46899737-0ca5-423e-bf0c-8f057c937e33-kube-api-access-bxqg8\") pod \"auto-csr-approver-29482614-t5wjm\" (UID: \"46899737-0ca5-423e-bf0c-8f057c937e33\") " pod="openshift-infra/auto-csr-approver-29482614-t5wjm" Jan 21 00:54:01 crc kubenswrapper[5118]: I0121 00:54:01.430371 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxqg8\" (UniqueName: \"kubernetes.io/projected/46899737-0ca5-423e-bf0c-8f057c937e33-kube-api-access-bxqg8\") pod \"auto-csr-approver-29482614-t5wjm\" (UID: \"46899737-0ca5-423e-bf0c-8f057c937e33\") " pod="openshift-infra/auto-csr-approver-29482614-t5wjm" Jan 21 00:54:01 crc kubenswrapper[5118]: I0121 00:54:01.543647 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482614-t5wjm" Jan 21 00:54:01 crc kubenswrapper[5118]: I0121 00:54:01.794556 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482614-t5wjm"] Jan 21 00:54:01 crc kubenswrapper[5118]: W0121 00:54:01.802057 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46899737_0ca5_423e_bf0c_8f057c937e33.slice/crio-92ec8a764e65a31d207e4e8437e7bbf055e76be6232c7dd444f2514ee1a46ac4 WatchSource:0}: Error finding container 92ec8a764e65a31d207e4e8437e7bbf055e76be6232c7dd444f2514ee1a46ac4: Status 404 returned error can't find the container with id 92ec8a764e65a31d207e4e8437e7bbf055e76be6232c7dd444f2514ee1a46ac4 Jan 21 00:54:02 crc kubenswrapper[5118]: I0121 00:54:02.115095 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:54:02 crc kubenswrapper[5118]: I0121 00:54:02.156769 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:54:02 crc kubenswrapper[5118]: I0121 00:54:02.431938 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482614-t5wjm" event={"ID":"46899737-0ca5-423e-bf0c-8f057c937e33","Type":"ContainerStarted","Data":"92ec8a764e65a31d207e4e8437e7bbf055e76be6232c7dd444f2514ee1a46ac4"} Jan 21 00:54:02 crc kubenswrapper[5118]: I0121 00:54:02.976544 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f" Jan 21 00:54:02 crc kubenswrapper[5118]: E0121 00:54:02.977928 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 00:54:03 crc kubenswrapper[5118]: I0121 00:54:03.440138 5118 generic.go:358] "Generic (PLEG): container finished" podID="46899737-0ca5-423e-bf0c-8f057c937e33" containerID="d4787b6abdc6b03481491be9acb58eceab56d36eae34995a3360cc206aeff10a" exitCode=0 Jan 21 00:54:03 crc kubenswrapper[5118]: I0121 00:54:03.440399 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482614-t5wjm" event={"ID":"46899737-0ca5-423e-bf0c-8f057c937e33","Type":"ContainerDied","Data":"d4787b6abdc6b03481491be9acb58eceab56d36eae34995a3360cc206aeff10a"} Jan 21 00:54:03 crc kubenswrapper[5118]: I0121 00:54:03.536239 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9lm5z"] Jan 21 00:54:03 crc kubenswrapper[5118]: I0121 00:54:03.536654 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9lm5z" podUID="3df8946b-3e8d-49e1-b746-57bc4b2dfd25" containerName="registry-server" containerID="cri-o://68b604f20eb6a529123b63581247a2c201637708cea9cb1a81a3c632f6bd7098" gracePeriod=2 Jan 21 00:54:03 crc kubenswrapper[5118]: I0121 00:54:03.957198 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.073290 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxbgf\" (UniqueName: \"kubernetes.io/projected/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-kube-api-access-jxbgf\") pod \"3df8946b-3e8d-49e1-b746-57bc4b2dfd25\" (UID: \"3df8946b-3e8d-49e1-b746-57bc4b2dfd25\") " Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.073590 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-utilities\") pod \"3df8946b-3e8d-49e1-b746-57bc4b2dfd25\" (UID: \"3df8946b-3e8d-49e1-b746-57bc4b2dfd25\") " Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.073800 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-catalog-content\") pod \"3df8946b-3e8d-49e1-b746-57bc4b2dfd25\" (UID: \"3df8946b-3e8d-49e1-b746-57bc4b2dfd25\") " Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.075340 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-utilities" (OuterVolumeSpecName: "utilities") pod "3df8946b-3e8d-49e1-b746-57bc4b2dfd25" (UID: "3df8946b-3e8d-49e1-b746-57bc4b2dfd25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.080356 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-kube-api-access-jxbgf" (OuterVolumeSpecName: "kube-api-access-jxbgf") pod "3df8946b-3e8d-49e1-b746-57bc4b2dfd25" (UID: "3df8946b-3e8d-49e1-b746-57bc4b2dfd25"). InnerVolumeSpecName "kube-api-access-jxbgf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.175932 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jxbgf\" (UniqueName: \"kubernetes.io/projected/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-kube-api-access-jxbgf\") on node \"crc\" DevicePath \"\"" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.175972 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.187268 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3df8946b-3e8d-49e1-b746-57bc4b2dfd25" (UID: "3df8946b-3e8d-49e1-b746-57bc4b2dfd25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.277374 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df8946b-3e8d-49e1-b746-57bc4b2dfd25-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.448334 5118 generic.go:358] "Generic (PLEG): container finished" podID="3df8946b-3e8d-49e1-b746-57bc4b2dfd25" containerID="68b604f20eb6a529123b63581247a2c201637708cea9cb1a81a3c632f6bd7098" exitCode=0 Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.448432 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9lm5z" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.448478 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lm5z" event={"ID":"3df8946b-3e8d-49e1-b746-57bc4b2dfd25","Type":"ContainerDied","Data":"68b604f20eb6a529123b63581247a2c201637708cea9cb1a81a3c632f6bd7098"} Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.448506 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lm5z" event={"ID":"3df8946b-3e8d-49e1-b746-57bc4b2dfd25","Type":"ContainerDied","Data":"105b102d313284ace6ad72651f0f27840b3605fda6a3b289d347b886e9f9ac6b"} Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.448522 5118 scope.go:117] "RemoveContainer" containerID="68b604f20eb6a529123b63581247a2c201637708cea9cb1a81a3c632f6bd7098" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.471366 5118 scope.go:117] "RemoveContainer" containerID="ca9324410f95783f394b1ab611e3ed00e7ce711097358e0a9a8fee4f3df2ad33" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.508231 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9lm5z"] Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.511499 5118 scope.go:117] "RemoveContainer" containerID="58cea7e66237e8ef4495f8cb7abc581b63342b89b706ff0d634681e440f446ef" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.515909 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9lm5z"] Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.537374 5118 scope.go:117] "RemoveContainer" containerID="68b604f20eb6a529123b63581247a2c201637708cea9cb1a81a3c632f6bd7098" Jan 21 00:54:04 crc kubenswrapper[5118]: E0121 00:54:04.538008 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"68b604f20eb6a529123b63581247a2c201637708cea9cb1a81a3c632f6bd7098\": container with ID starting with 68b604f20eb6a529123b63581247a2c201637708cea9cb1a81a3c632f6bd7098 not found: ID does not exist" containerID="68b604f20eb6a529123b63581247a2c201637708cea9cb1a81a3c632f6bd7098" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.538064 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68b604f20eb6a529123b63581247a2c201637708cea9cb1a81a3c632f6bd7098"} err="failed to get container status \"68b604f20eb6a529123b63581247a2c201637708cea9cb1a81a3c632f6bd7098\": rpc error: code = NotFound desc = could not find container \"68b604f20eb6a529123b63581247a2c201637708cea9cb1a81a3c632f6bd7098\": container with ID starting with 68b604f20eb6a529123b63581247a2c201637708cea9cb1a81a3c632f6bd7098 not found: ID does not exist" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.538090 5118 scope.go:117] "RemoveContainer" containerID="ca9324410f95783f394b1ab611e3ed00e7ce711097358e0a9a8fee4f3df2ad33" Jan 21 00:54:04 crc kubenswrapper[5118]: E0121 00:54:04.540178 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca9324410f95783f394b1ab611e3ed00e7ce711097358e0a9a8fee4f3df2ad33\": container with ID starting with ca9324410f95783f394b1ab611e3ed00e7ce711097358e0a9a8fee4f3df2ad33 not found: ID does not exist" containerID="ca9324410f95783f394b1ab611e3ed00e7ce711097358e0a9a8fee4f3df2ad33" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.540298 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca9324410f95783f394b1ab611e3ed00e7ce711097358e0a9a8fee4f3df2ad33"} err="failed to get container status \"ca9324410f95783f394b1ab611e3ed00e7ce711097358e0a9a8fee4f3df2ad33\": rpc error: code = NotFound desc = could not find container \"ca9324410f95783f394b1ab611e3ed00e7ce711097358e0a9a8fee4f3df2ad33\": container with ID 
starting with ca9324410f95783f394b1ab611e3ed00e7ce711097358e0a9a8fee4f3df2ad33 not found: ID does not exist" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.540382 5118 scope.go:117] "RemoveContainer" containerID="58cea7e66237e8ef4495f8cb7abc581b63342b89b706ff0d634681e440f446ef" Jan 21 00:54:04 crc kubenswrapper[5118]: E0121 00:54:04.540922 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58cea7e66237e8ef4495f8cb7abc581b63342b89b706ff0d634681e440f446ef\": container with ID starting with 58cea7e66237e8ef4495f8cb7abc581b63342b89b706ff0d634681e440f446ef not found: ID does not exist" containerID="58cea7e66237e8ef4495f8cb7abc581b63342b89b706ff0d634681e440f446ef" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.541027 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58cea7e66237e8ef4495f8cb7abc581b63342b89b706ff0d634681e440f446ef"} err="failed to get container status \"58cea7e66237e8ef4495f8cb7abc581b63342b89b706ff0d634681e440f446ef\": rpc error: code = NotFound desc = could not find container \"58cea7e66237e8ef4495f8cb7abc581b63342b89b706ff0d634681e440f446ef\": container with ID starting with 58cea7e66237e8ef4495f8cb7abc581b63342b89b706ff0d634681e440f446ef not found: ID does not exist" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.673022 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482614-t5wjm" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.782856 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxqg8\" (UniqueName: \"kubernetes.io/projected/46899737-0ca5-423e-bf0c-8f057c937e33-kube-api-access-bxqg8\") pod \"46899737-0ca5-423e-bf0c-8f057c937e33\" (UID: \"46899737-0ca5-423e-bf0c-8f057c937e33\") " Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.788294 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46899737-0ca5-423e-bf0c-8f057c937e33-kube-api-access-bxqg8" (OuterVolumeSpecName: "kube-api-access-bxqg8") pod "46899737-0ca5-423e-bf0c-8f057c937e33" (UID: "46899737-0ca5-423e-bf0c-8f057c937e33"). InnerVolumeSpecName "kube-api-access-bxqg8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.884114 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bxqg8\" (UniqueName: \"kubernetes.io/projected/46899737-0ca5-423e-bf0c-8f057c937e33-kube-api-access-bxqg8\") on node \"crc\" DevicePath \"\"" Jan 21 00:54:04 crc kubenswrapper[5118]: I0121 00:54:04.987590 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df8946b-3e8d-49e1-b746-57bc4b2dfd25" path="/var/lib/kubelet/pods/3df8946b-3e8d-49e1-b746-57bc4b2dfd25/volumes" Jan 21 00:54:05 crc kubenswrapper[5118]: I0121 00:54:05.464401 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482614-t5wjm"
Jan 21 00:54:05 crc kubenswrapper[5118]: I0121 00:54:05.464471 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482614-t5wjm" event={"ID":"46899737-0ca5-423e-bf0c-8f057c937e33","Type":"ContainerDied","Data":"92ec8a764e65a31d207e4e8437e7bbf055e76be6232c7dd444f2514ee1a46ac4"}
Jan 21 00:54:05 crc kubenswrapper[5118]: I0121 00:54:05.465960 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92ec8a764e65a31d207e4e8437e7bbf055e76be6232c7dd444f2514ee1a46ac4"
Jan 21 00:54:05 crc kubenswrapper[5118]: I0121 00:54:05.752384 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482608-6rbcr"]
Jan 21 00:54:05 crc kubenswrapper[5118]: I0121 00:54:05.758295 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482608-6rbcr"]
Jan 21 00:54:06 crc kubenswrapper[5118]: I0121 00:54:06.050915 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log"
Jan 21 00:54:06 crc kubenswrapper[5118]: I0121 00:54:06.051096 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log"
Jan 21 00:54:06 crc kubenswrapper[5118]: I0121 00:54:06.064897 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 00:54:06 crc kubenswrapper[5118]: I0121 00:54:06.065367 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 00:54:06 crc kubenswrapper[5118]: I0121 00:54:06.986449 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6831a76a-6c97-495d-8f1b-51173c10abbb" path="/var/lib/kubelet/pods/6831a76a-6c97-495d-8f1b-51173c10abbb/volumes"
Jan 21 00:54:08 crc kubenswrapper[5118]: I0121 00:54:08.730279 5118 scope.go:117] "RemoveContainer" containerID="40c738d9ced3497ee9e8838696a6b396f879c93064383f1745230e6801c7585c"
Jan 21 00:54:13 crc kubenswrapper[5118]: I0121 00:54:13.976430 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:54:13 crc kubenswrapper[5118]: E0121 00:54:13.979284 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:54:27 crc kubenswrapper[5118]: I0121 00:54:27.976118 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:54:27 crc kubenswrapper[5118]: E0121 00:54:27.977306 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:54:38 crc kubenswrapper[5118]: I0121 00:54:38.988949 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:54:38 crc kubenswrapper[5118]: E0121 00:54:38.991055 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:54:49 crc kubenswrapper[5118]: I0121 00:54:49.976494 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:54:49 crc kubenswrapper[5118]: E0121 00:54:49.977796 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:55:03 crc kubenswrapper[5118]: I0121 00:55:03.975620 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:55:03 crc kubenswrapper[5118]: E0121 00:55:03.976505 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:55:14 crc kubenswrapper[5118]: I0121 00:55:14.983553 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:55:14 crc kubenswrapper[5118]: E0121 00:55:14.984183 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:55:29 crc kubenswrapper[5118]: I0121 00:55:29.976223 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:55:29 crc kubenswrapper[5118]: E0121 00:55:29.977137 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 00:55:42 crc kubenswrapper[5118]: I0121 00:55:42.976193 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:55:43 crc kubenswrapper[5118]: I0121 00:55:43.384136 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"65522b58c4e66c6b9d619159bfd79bf25c3edfc786d498146b82854125a4f813"}
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.151268 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482616-qvvct"]
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.153563 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3df8946b-3e8d-49e1-b746-57bc4b2dfd25" containerName="extract-utilities"
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.153587 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df8946b-3e8d-49e1-b746-57bc4b2dfd25" containerName="extract-utilities"
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.153814 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3df8946b-3e8d-49e1-b746-57bc4b2dfd25" containerName="registry-server"
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.153831 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df8946b-3e8d-49e1-b746-57bc4b2dfd25" containerName="registry-server"
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.155201 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="46899737-0ca5-423e-bf0c-8f057c937e33" containerName="oc"
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.155228 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="46899737-0ca5-423e-bf0c-8f057c937e33" containerName="oc"
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.155267 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3df8946b-3e8d-49e1-b746-57bc4b2dfd25" containerName="extract-content"
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.155278 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df8946b-3e8d-49e1-b746-57bc4b2dfd25" containerName="extract-content"
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.155495 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="46899737-0ca5-423e-bf0c-8f057c937e33" containerName="oc"
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.155528 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3df8946b-3e8d-49e1-b746-57bc4b2dfd25" containerName="registry-server"
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.166584 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482616-qvvct"]
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.166776 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482616-qvvct"
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.170113 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.170114 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.171035 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\""
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.253002 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5996\" (UniqueName: \"kubernetes.io/projected/60b8e072-9e63-4258-a607-54ff40b13e88-kube-api-access-q5996\") pod \"auto-csr-approver-29482616-qvvct\" (UID: \"60b8e072-9e63-4258-a607-54ff40b13e88\") " pod="openshift-infra/auto-csr-approver-29482616-qvvct"
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.355406 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5996\" (UniqueName: \"kubernetes.io/projected/60b8e072-9e63-4258-a607-54ff40b13e88-kube-api-access-q5996\") pod \"auto-csr-approver-29482616-qvvct\" (UID: \"60b8e072-9e63-4258-a607-54ff40b13e88\") " pod="openshift-infra/auto-csr-approver-29482616-qvvct"
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.390905 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5996\" (UniqueName: \"kubernetes.io/projected/60b8e072-9e63-4258-a607-54ff40b13e88-kube-api-access-q5996\") pod \"auto-csr-approver-29482616-qvvct\" (UID: \"60b8e072-9e63-4258-a607-54ff40b13e88\") " pod="openshift-infra/auto-csr-approver-29482616-qvvct"
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.506110 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482616-qvvct"
Jan 21 00:56:00 crc kubenswrapper[5118]: I0121 00:56:00.781513 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482616-qvvct"]
Jan 21 00:56:01 crc kubenswrapper[5118]: I0121 00:56:01.549506 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482616-qvvct" event={"ID":"60b8e072-9e63-4258-a607-54ff40b13e88","Type":"ContainerStarted","Data":"c98ca5ef9cfe0a882f8859feaafe97dd735f77fe202f0f938d1c306812f9a7cd"}
Jan 21 00:56:02 crc kubenswrapper[5118]: I0121 00:56:02.561707 5118 generic.go:358] "Generic (PLEG): container finished" podID="60b8e072-9e63-4258-a607-54ff40b13e88" containerID="9d37d3004789fd98b02027ef8b340526355526ee7c76aa0be8730d4941557340" exitCode=0
Jan 21 00:56:02 crc kubenswrapper[5118]: I0121 00:56:02.561930 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482616-qvvct" event={"ID":"60b8e072-9e63-4258-a607-54ff40b13e88","Type":"ContainerDied","Data":"9d37d3004789fd98b02027ef8b340526355526ee7c76aa0be8730d4941557340"}
Jan 21 00:56:03 crc kubenswrapper[5118]: I0121 00:56:03.810151 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482616-qvvct"
Jan 21 00:56:03 crc kubenswrapper[5118]: I0121 00:56:03.924693 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5996\" (UniqueName: \"kubernetes.io/projected/60b8e072-9e63-4258-a607-54ff40b13e88-kube-api-access-q5996\") pod \"60b8e072-9e63-4258-a607-54ff40b13e88\" (UID: \"60b8e072-9e63-4258-a607-54ff40b13e88\") "
Jan 21 00:56:03 crc kubenswrapper[5118]: I0121 00:56:03.955520 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60b8e072-9e63-4258-a607-54ff40b13e88-kube-api-access-q5996" (OuterVolumeSpecName: "kube-api-access-q5996") pod "60b8e072-9e63-4258-a607-54ff40b13e88" (UID: "60b8e072-9e63-4258-a607-54ff40b13e88"). InnerVolumeSpecName "kube-api-access-q5996". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:56:04 crc kubenswrapper[5118]: I0121 00:56:04.026647 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q5996\" (UniqueName: \"kubernetes.io/projected/60b8e072-9e63-4258-a607-54ff40b13e88-kube-api-access-q5996\") on node \"crc\" DevicePath \"\""
Jan 21 00:56:04 crc kubenswrapper[5118]: I0121 00:56:04.582876 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482616-qvvct"
Jan 21 00:56:04 crc kubenswrapper[5118]: I0121 00:56:04.582990 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482616-qvvct" event={"ID":"60b8e072-9e63-4258-a607-54ff40b13e88","Type":"ContainerDied","Data":"c98ca5ef9cfe0a882f8859feaafe97dd735f77fe202f0f938d1c306812f9a7cd"}
Jan 21 00:56:04 crc kubenswrapper[5118]: I0121 00:56:04.583030 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c98ca5ef9cfe0a882f8859feaafe97dd735f77fe202f0f938d1c306812f9a7cd"
Jan 21 00:56:04 crc kubenswrapper[5118]: I0121 00:56:04.897392 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482610-ltfcp"]
Jan 21 00:56:04 crc kubenswrapper[5118]: I0121 00:56:04.905416 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482610-ltfcp"]
Jan 21 00:56:04 crc kubenswrapper[5118]: I0121 00:56:04.990187 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="098036bf-94d5-47e6-819b-bb3012cb75a4" path="/var/lib/kubelet/pods/098036bf-94d5-47e6-819b-bb3012cb75a4/volumes"
Jan 21 00:56:08 crc kubenswrapper[5118]: I0121 00:56:08.905423 5118 scope.go:117] "RemoveContainer" containerID="5686c7ae663dee93f51f2d2db1dc3fd307c2f161b36ae2f7953186adfa6bbeb7"
Jan 21 00:58:00 crc kubenswrapper[5118]: I0121 00:58:00.172340 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482618-95pkp"]
Jan 21 00:58:00 crc kubenswrapper[5118]: I0121 00:58:00.177832 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="60b8e072-9e63-4258-a607-54ff40b13e88" containerName="oc"
Jan 21 00:58:00 crc kubenswrapper[5118]: I0121 00:58:00.177877 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="60b8e072-9e63-4258-a607-54ff40b13e88" containerName="oc"
Jan 21 00:58:00 crc kubenswrapper[5118]: I0121 00:58:00.178195 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="60b8e072-9e63-4258-a607-54ff40b13e88" containerName="oc"
Jan 21 00:58:00 crc kubenswrapper[5118]: I0121 00:58:00.198293 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482618-95pkp"]
Jan 21 00:58:00 crc kubenswrapper[5118]: I0121 00:58:00.198541 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482618-95pkp"
Jan 21 00:58:00 crc kubenswrapper[5118]: I0121 00:58:00.203417 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 00:58:00 crc kubenswrapper[5118]: I0121 00:58:00.203437 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 00:58:00 crc kubenswrapper[5118]: I0121 00:58:00.204271 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\""
Jan 21 00:58:00 crc kubenswrapper[5118]: I0121 00:58:00.330003 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgkxb\" (UniqueName: \"kubernetes.io/projected/e4f580dc-7513-4504-bf79-07c609dce4a0-kube-api-access-hgkxb\") pod \"auto-csr-approver-29482618-95pkp\" (UID: \"e4f580dc-7513-4504-bf79-07c609dce4a0\") " pod="openshift-infra/auto-csr-approver-29482618-95pkp"
Jan 21 00:58:00 crc kubenswrapper[5118]: I0121 00:58:00.431584 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hgkxb\" (UniqueName: \"kubernetes.io/projected/e4f580dc-7513-4504-bf79-07c609dce4a0-kube-api-access-hgkxb\") pod \"auto-csr-approver-29482618-95pkp\" (UID: \"e4f580dc-7513-4504-bf79-07c609dce4a0\") " pod="openshift-infra/auto-csr-approver-29482618-95pkp"
Jan 21 00:58:00 crc kubenswrapper[5118]: I0121 00:58:00.459618 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgkxb\" (UniqueName: \"kubernetes.io/projected/e4f580dc-7513-4504-bf79-07c609dce4a0-kube-api-access-hgkxb\") pod \"auto-csr-approver-29482618-95pkp\" (UID: \"e4f580dc-7513-4504-bf79-07c609dce4a0\") " pod="openshift-infra/auto-csr-approver-29482618-95pkp"
Jan 21 00:58:00 crc kubenswrapper[5118]: I0121 00:58:00.525812 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482618-95pkp"
Jan 21 00:58:00 crc kubenswrapper[5118]: I0121 00:58:00.987385 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482618-95pkp"]
Jan 21 00:58:00 crc kubenswrapper[5118]: W0121 00:58:00.992726 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4f580dc_7513_4504_bf79_07c609dce4a0.slice/crio-8f59c4f6edb431311429d109068b1a4880b1f6f6fb8373be7922a48a5af48e09 WatchSource:0}: Error finding container 8f59c4f6edb431311429d109068b1a4880b1f6f6fb8373be7922a48a5af48e09: Status 404 returned error can't find the container with id 8f59c4f6edb431311429d109068b1a4880b1f6f6fb8373be7922a48a5af48e09
Jan 21 00:58:01 crc kubenswrapper[5118]: I0121 00:58:01.691138 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482618-95pkp" event={"ID":"e4f580dc-7513-4504-bf79-07c609dce4a0","Type":"ContainerStarted","Data":"8f59c4f6edb431311429d109068b1a4880b1f6f6fb8373be7922a48a5af48e09"}
Jan 21 00:58:02 crc kubenswrapper[5118]: I0121 00:58:02.701530 5118 generic.go:358] "Generic (PLEG): container finished" podID="e4f580dc-7513-4504-bf79-07c609dce4a0" containerID="6045a42860a3d1100bc1d512659d3e12bb0358f846fb34fcb56ed1a5147f86f4" exitCode=0
Jan 21 00:58:02 crc kubenswrapper[5118]: I0121 00:58:02.701711 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482618-95pkp" event={"ID":"e4f580dc-7513-4504-bf79-07c609dce4a0","Type":"ContainerDied","Data":"6045a42860a3d1100bc1d512659d3e12bb0358f846fb34fcb56ed1a5147f86f4"}
Jan 21 00:58:03 crc kubenswrapper[5118]: I0121 00:58:03.802918 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:58:03 crc kubenswrapper[5118]: I0121 00:58:03.803375 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:58:04 crc kubenswrapper[5118]: I0121 00:58:04.019292 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482618-95pkp"
Jan 21 00:58:04 crc kubenswrapper[5118]: I0121 00:58:04.196625 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgkxb\" (UniqueName: \"kubernetes.io/projected/e4f580dc-7513-4504-bf79-07c609dce4a0-kube-api-access-hgkxb\") pod \"e4f580dc-7513-4504-bf79-07c609dce4a0\" (UID: \"e4f580dc-7513-4504-bf79-07c609dce4a0\") "
Jan 21 00:58:04 crc kubenswrapper[5118]: I0121 00:58:04.206537 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4f580dc-7513-4504-bf79-07c609dce4a0-kube-api-access-hgkxb" (OuterVolumeSpecName: "kube-api-access-hgkxb") pod "e4f580dc-7513-4504-bf79-07c609dce4a0" (UID: "e4f580dc-7513-4504-bf79-07c609dce4a0"). InnerVolumeSpecName "kube-api-access-hgkxb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 00:58:04 crc kubenswrapper[5118]: I0121 00:58:04.299263 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hgkxb\" (UniqueName: \"kubernetes.io/projected/e4f580dc-7513-4504-bf79-07c609dce4a0-kube-api-access-hgkxb\") on node \"crc\" DevicePath \"\""
Jan 21 00:58:04 crc kubenswrapper[5118]: I0121 00:58:04.720395 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482618-95pkp"
Jan 21 00:58:04 crc kubenswrapper[5118]: I0121 00:58:04.720402 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482618-95pkp" event={"ID":"e4f580dc-7513-4504-bf79-07c609dce4a0","Type":"ContainerDied","Data":"8f59c4f6edb431311429d109068b1a4880b1f6f6fb8373be7922a48a5af48e09"}
Jan 21 00:58:04 crc kubenswrapper[5118]: I0121 00:58:04.720520 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f59c4f6edb431311429d109068b1a4880b1f6f6fb8373be7922a48a5af48e09"
Jan 21 00:58:05 crc kubenswrapper[5118]: I0121 00:58:05.111197 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482612-srpvx"]
Jan 21 00:58:05 crc kubenswrapper[5118]: I0121 00:58:05.126683 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482612-srpvx"]
Jan 21 00:58:06 crc kubenswrapper[5118]: I0121 00:58:06.988438 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c9d4ec-b4fa-4572-a111-bdc4e5afab4f" path="/var/lib/kubelet/pods/83c9d4ec-b4fa-4572-a111-bdc4e5afab4f/volumes"
Jan 21 00:58:09 crc kubenswrapper[5118]: I0121 00:58:09.090269 5118 scope.go:117] "RemoveContainer" containerID="782c36dae3e00e012e60f0c30eb0a514b9bdfbaa83b1ec242a7e8a53034f6899"
Jan 21 00:58:33 crc kubenswrapper[5118]: I0121 00:58:33.801249 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:58:33 crc kubenswrapper[5118]: I0121 00:58:33.802201 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:59:03 crc kubenswrapper[5118]: I0121 00:59:03.800552 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 00:59:03 crc kubenswrapper[5118]: I0121 00:59:03.800991 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 00:59:03 crc kubenswrapper[5118]: I0121 00:59:03.801033 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-22r9n"
Jan 21 00:59:03 crc kubenswrapper[5118]: I0121 00:59:03.801576 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65522b58c4e66c6b9d619159bfd79bf25c3edfc786d498146b82854125a4f813"} pod="openshift-machine-config-operator/machine-config-daemon-22r9n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 00:59:03 crc kubenswrapper[5118]: I0121 00:59:03.801629 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" containerID="cri-o://65522b58c4e66c6b9d619159bfd79bf25c3edfc786d498146b82854125a4f813" gracePeriod=600
Jan 21 00:59:03 crc kubenswrapper[5118]: I0121 00:59:03.944520 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 00:59:04 crc kubenswrapper[5118]: I0121 00:59:04.302596 5118 generic.go:358] "Generic (PLEG): container finished" podID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerID="65522b58c4e66c6b9d619159bfd79bf25c3edfc786d498146b82854125a4f813" exitCode=0
Jan 21 00:59:04 crc kubenswrapper[5118]: I0121 00:59:04.302685 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerDied","Data":"65522b58c4e66c6b9d619159bfd79bf25c3edfc786d498146b82854125a4f813"}
Jan 21 00:59:04 crc kubenswrapper[5118]: I0121 00:59:04.303305 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3"}
Jan 21 00:59:04 crc kubenswrapper[5118]: I0121 00:59:04.303351 5118 scope.go:117] "RemoveContainer" containerID="1169db6a3b25445bfb02507cef724d6fd0c2d9da9b3393a9bf487e98f8ffe67f"
Jan 21 00:59:06 crc kubenswrapper[5118]: I0121 00:59:06.165587 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log"
Jan 21 00:59:06 crc kubenswrapper[5118]: I0121 00:59:06.165710 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log"
Jan 21 00:59:06 crc kubenswrapper[5118]: I0121 00:59:06.190043 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 00:59:06 crc kubenswrapper[5118]: I0121 00:59:06.190632 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 00:59:30 crc kubenswrapper[5118]: I0121 00:59:30.635876 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dfqw5"]
Jan 21 00:59:30 crc kubenswrapper[5118]: I0121 00:59:30.637802 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4f580dc-7513-4504-bf79-07c609dce4a0" containerName="oc"
Jan 21 00:59:30 crc kubenswrapper[5118]: I0121 00:59:30.637821 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f580dc-7513-4504-bf79-07c609dce4a0" containerName="oc"
Jan 21 00:59:30 crc kubenswrapper[5118]: I0121 00:59:30.638003 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="e4f580dc-7513-4504-bf79-07c609dce4a0" containerName="oc"
Jan 21 00:59:30 crc kubenswrapper[5118]: I0121 00:59:30.651138 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:30 crc kubenswrapper[5118]: I0121 00:59:30.655381 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dfqw5"]
Jan 21 00:59:30 crc kubenswrapper[5118]: I0121 00:59:30.703772 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4cc3a73-b6ad-4c95-a362-a89317f05631-catalog-content\") pod \"community-operators-dfqw5\" (UID: \"f4cc3a73-b6ad-4c95-a362-a89317f05631\") " pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:30 crc kubenswrapper[5118]: I0121 00:59:30.703859 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4cc3a73-b6ad-4c95-a362-a89317f05631-utilities\") pod \"community-operators-dfqw5\" (UID: \"f4cc3a73-b6ad-4c95-a362-a89317f05631\") " pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:30 crc kubenswrapper[5118]: I0121 00:59:30.703888 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qmvv\" (UniqueName: \"kubernetes.io/projected/f4cc3a73-b6ad-4c95-a362-a89317f05631-kube-api-access-9qmvv\") pod \"community-operators-dfqw5\" (UID: \"f4cc3a73-b6ad-4c95-a362-a89317f05631\") " pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:30 crc kubenswrapper[5118]: I0121 00:59:30.805387 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4cc3a73-b6ad-4c95-a362-a89317f05631-catalog-content\") pod \"community-operators-dfqw5\" (UID: \"f4cc3a73-b6ad-4c95-a362-a89317f05631\") " pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:30 crc kubenswrapper[5118]: I0121 00:59:30.805500 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4cc3a73-b6ad-4c95-a362-a89317f05631-utilities\") pod \"community-operators-dfqw5\" (UID: \"f4cc3a73-b6ad-4c95-a362-a89317f05631\") " pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:30 crc kubenswrapper[5118]: I0121 00:59:30.805519 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9qmvv\" (UniqueName: \"kubernetes.io/projected/f4cc3a73-b6ad-4c95-a362-a89317f05631-kube-api-access-9qmvv\") pod \"community-operators-dfqw5\" (UID: \"f4cc3a73-b6ad-4c95-a362-a89317f05631\") " pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:30 crc kubenswrapper[5118]: I0121 00:59:30.805994 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4cc3a73-b6ad-4c95-a362-a89317f05631-catalog-content\") pod \"community-operators-dfqw5\" (UID: \"f4cc3a73-b6ad-4c95-a362-a89317f05631\") " pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:30 crc kubenswrapper[5118]: I0121 00:59:30.806056 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4cc3a73-b6ad-4c95-a362-a89317f05631-utilities\") pod \"community-operators-dfqw5\" (UID: \"f4cc3a73-b6ad-4c95-a362-a89317f05631\") " pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:30 crc kubenswrapper[5118]: I0121 00:59:30.828524 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qmvv\" (UniqueName: \"kubernetes.io/projected/f4cc3a73-b6ad-4c95-a362-a89317f05631-kube-api-access-9qmvv\") pod \"community-operators-dfqw5\" (UID: \"f4cc3a73-b6ad-4c95-a362-a89317f05631\") " pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:31 crc kubenswrapper[5118]: I0121 00:59:31.014885 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:31 crc kubenswrapper[5118]: I0121 00:59:31.299767 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dfqw5"]
Jan 21 00:59:31 crc kubenswrapper[5118]: W0121 00:59:31.301338 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4cc3a73_b6ad_4c95_a362_a89317f05631.slice/crio-537d8ef15329adaccfb89e53ec88bd0799a2d0135d988f4f65d8cb9e4a7c1301 WatchSource:0}: Error finding container 537d8ef15329adaccfb89e53ec88bd0799a2d0135d988f4f65d8cb9e4a7c1301: Status 404 returned error can't find the container with id 537d8ef15329adaccfb89e53ec88bd0799a2d0135d988f4f65d8cb9e4a7c1301
Jan 21 00:59:31 crc kubenswrapper[5118]: I0121 00:59:31.898331 5118 generic.go:358] "Generic (PLEG): container finished" podID="f4cc3a73-b6ad-4c95-a362-a89317f05631" containerID="ea9cff8dab770dec20b856e2e41ceb4c55761b306c5c3eb8a75ca807e8333420" exitCode=0
Jan 21 00:59:31 crc kubenswrapper[5118]: I0121 00:59:31.898380 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfqw5" event={"ID":"f4cc3a73-b6ad-4c95-a362-a89317f05631","Type":"ContainerDied","Data":"ea9cff8dab770dec20b856e2e41ceb4c55761b306c5c3eb8a75ca807e8333420"}
Jan 21 00:59:31 crc kubenswrapper[5118]: I0121 00:59:31.898863 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfqw5" event={"ID":"f4cc3a73-b6ad-4c95-a362-a89317f05631","Type":"ContainerStarted","Data":"537d8ef15329adaccfb89e53ec88bd0799a2d0135d988f4f65d8cb9e4a7c1301"}
Jan 21 00:59:32 crc kubenswrapper[5118]: I0121 00:59:32.910596 5118 generic.go:358] "Generic (PLEG): container finished" podID="f4cc3a73-b6ad-4c95-a362-a89317f05631" containerID="f25283c01e11916c5b7a81f0cdf73415833e87f7ce493e25e64128599c0772f2" exitCode=0
Jan 21 00:59:32 crc kubenswrapper[5118]: I0121 00:59:32.910667 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfqw5" event={"ID":"f4cc3a73-b6ad-4c95-a362-a89317f05631","Type":"ContainerDied","Data":"f25283c01e11916c5b7a81f0cdf73415833e87f7ce493e25e64128599c0772f2"}
Jan 21 00:59:33 crc kubenswrapper[5118]: I0121 00:59:33.922392 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfqw5" event={"ID":"f4cc3a73-b6ad-4c95-a362-a89317f05631","Type":"ContainerStarted","Data":"34235ea52391d9e1613476b7eb05c4d07181eacf8936db0556c3d4beb9fab310"}
Jan 21 00:59:33 crc kubenswrapper[5118]: I0121 00:59:33.960041 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dfqw5" podStartSLOduration=3.361931018 podStartE2EDuration="3.960021755s" podCreationTimestamp="2026-01-21 00:59:30 +0000 UTC" firstStartedPulling="2026-01-21 00:59:31.903536278 +0000 UTC m=+3027.227783336" lastFinishedPulling="2026-01-21 00:59:32.501627045 +0000 UTC m=+3027.825874073" observedRunningTime="2026-01-21 00:59:33.954122032 +0000 UTC m=+3029.278369090" watchObservedRunningTime="2026-01-21 00:59:33.960021755 +0000 UTC m=+3029.284268783"
Jan 21 00:59:41 crc kubenswrapper[5118]: I0121 00:59:41.015853 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:41 crc kubenswrapper[5118]: I0121 00:59:41.018019 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:41 crc kubenswrapper[5118]: I0121 00:59:41.086601 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:42 crc kubenswrapper[5118]: I0121 00:59:42.061603 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:42 crc kubenswrapper[5118]: I0121 00:59:42.133223 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dfqw5"]
Jan 21 00:59:44 crc kubenswrapper[5118]: I0121 00:59:44.028069 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dfqw5" podUID="f4cc3a73-b6ad-4c95-a362-a89317f05631" containerName="registry-server" containerID="cri-o://34235ea52391d9e1613476b7eb05c4d07181eacf8936db0556c3d4beb9fab310" gracePeriod=2
Jan 21 00:59:44 crc kubenswrapper[5118]: I0121 00:59:44.967580 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.042985 5118 generic.go:358] "Generic (PLEG): container finished" podID="f4cc3a73-b6ad-4c95-a362-a89317f05631" containerID="34235ea52391d9e1613476b7eb05c4d07181eacf8936db0556c3d4beb9fab310" exitCode=0
Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.043086 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dfqw5"
Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.043106 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfqw5" event={"ID":"f4cc3a73-b6ad-4c95-a362-a89317f05631","Type":"ContainerDied","Data":"34235ea52391d9e1613476b7eb05c4d07181eacf8936db0556c3d4beb9fab310"}
Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.044959 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfqw5" event={"ID":"f4cc3a73-b6ad-4c95-a362-a89317f05631","Type":"ContainerDied","Data":"537d8ef15329adaccfb89e53ec88bd0799a2d0135d988f4f65d8cb9e4a7c1301"}
Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.044992 5118 scope.go:117] "RemoveContainer" containerID="34235ea52391d9e1613476b7eb05c4d07181eacf8936db0556c3d4beb9fab310"
Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.053422 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4cc3a73-b6ad-4c95-a362-a89317f05631-catalog-content\") pod \"f4cc3a73-b6ad-4c95-a362-a89317f05631\" (UID: \"f4cc3a73-b6ad-4c95-a362-a89317f05631\") "
Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.053552 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4cc3a73-b6ad-4c95-a362-a89317f05631-utilities\") pod \"f4cc3a73-b6ad-4c95-a362-a89317f05631\" (UID: \"f4cc3a73-b6ad-4c95-a362-a89317f05631\") "
Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.053737 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qmvv\" (UniqueName: \"kubernetes.io/projected/f4cc3a73-b6ad-4c95-a362-a89317f05631-kube-api-access-9qmvv\") pod \"f4cc3a73-b6ad-4c95-a362-a89317f05631\" (UID: \"f4cc3a73-b6ad-4c95-a362-a89317f05631\") "
Jan 21 00:59:45 crc 
kubenswrapper[5118]: I0121 00:59:45.055449 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4cc3a73-b6ad-4c95-a362-a89317f05631-utilities" (OuterVolumeSpecName: "utilities") pod "f4cc3a73-b6ad-4c95-a362-a89317f05631" (UID: "f4cc3a73-b6ad-4c95-a362-a89317f05631"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.065352 5118 scope.go:117] "RemoveContainer" containerID="f25283c01e11916c5b7a81f0cdf73415833e87f7ce493e25e64128599c0772f2" Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.066498 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4cc3a73-b6ad-4c95-a362-a89317f05631-kube-api-access-9qmvv" (OuterVolumeSpecName: "kube-api-access-9qmvv") pod "f4cc3a73-b6ad-4c95-a362-a89317f05631" (UID: "f4cc3a73-b6ad-4c95-a362-a89317f05631"). InnerVolumeSpecName "kube-api-access-9qmvv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.099315 5118 scope.go:117] "RemoveContainer" containerID="ea9cff8dab770dec20b856e2e41ceb4c55761b306c5c3eb8a75ca807e8333420" Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.120743 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4cc3a73-b6ad-4c95-a362-a89317f05631-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4cc3a73-b6ad-4c95-a362-a89317f05631" (UID: "f4cc3a73-b6ad-4c95-a362-a89317f05631"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.121344 5118 scope.go:117] "RemoveContainer" containerID="34235ea52391d9e1613476b7eb05c4d07181eacf8936db0556c3d4beb9fab310" Jan 21 00:59:45 crc kubenswrapper[5118]: E0121 00:59:45.122086 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34235ea52391d9e1613476b7eb05c4d07181eacf8936db0556c3d4beb9fab310\": container with ID starting with 34235ea52391d9e1613476b7eb05c4d07181eacf8936db0556c3d4beb9fab310 not found: ID does not exist" containerID="34235ea52391d9e1613476b7eb05c4d07181eacf8936db0556c3d4beb9fab310" Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.122130 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34235ea52391d9e1613476b7eb05c4d07181eacf8936db0556c3d4beb9fab310"} err="failed to get container status \"34235ea52391d9e1613476b7eb05c4d07181eacf8936db0556c3d4beb9fab310\": rpc error: code = NotFound desc = could not find container \"34235ea52391d9e1613476b7eb05c4d07181eacf8936db0556c3d4beb9fab310\": container with ID starting with 34235ea52391d9e1613476b7eb05c4d07181eacf8936db0556c3d4beb9fab310 not found: ID does not exist" Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.122535 5118 scope.go:117] "RemoveContainer" containerID="f25283c01e11916c5b7a81f0cdf73415833e87f7ce493e25e64128599c0772f2" Jan 21 00:59:45 crc kubenswrapper[5118]: E0121 00:59:45.122875 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f25283c01e11916c5b7a81f0cdf73415833e87f7ce493e25e64128599c0772f2\": container with ID starting with f25283c01e11916c5b7a81f0cdf73415833e87f7ce493e25e64128599c0772f2 not found: ID does not exist" containerID="f25283c01e11916c5b7a81f0cdf73415833e87f7ce493e25e64128599c0772f2" Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.122906 
5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f25283c01e11916c5b7a81f0cdf73415833e87f7ce493e25e64128599c0772f2"} err="failed to get container status \"f25283c01e11916c5b7a81f0cdf73415833e87f7ce493e25e64128599c0772f2\": rpc error: code = NotFound desc = could not find container \"f25283c01e11916c5b7a81f0cdf73415833e87f7ce493e25e64128599c0772f2\": container with ID starting with f25283c01e11916c5b7a81f0cdf73415833e87f7ce493e25e64128599c0772f2 not found: ID does not exist" Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.122924 5118 scope.go:117] "RemoveContainer" containerID="ea9cff8dab770dec20b856e2e41ceb4c55761b306c5c3eb8a75ca807e8333420" Jan 21 00:59:45 crc kubenswrapper[5118]: E0121 00:59:45.123290 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea9cff8dab770dec20b856e2e41ceb4c55761b306c5c3eb8a75ca807e8333420\": container with ID starting with ea9cff8dab770dec20b856e2e41ceb4c55761b306c5c3eb8a75ca807e8333420 not found: ID does not exist" containerID="ea9cff8dab770dec20b856e2e41ceb4c55761b306c5c3eb8a75ca807e8333420" Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.123344 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea9cff8dab770dec20b856e2e41ceb4c55761b306c5c3eb8a75ca807e8333420"} err="failed to get container status \"ea9cff8dab770dec20b856e2e41ceb4c55761b306c5c3eb8a75ca807e8333420\": rpc error: code = NotFound desc = could not find container \"ea9cff8dab770dec20b856e2e41ceb4c55761b306c5c3eb8a75ca807e8333420\": container with ID starting with ea9cff8dab770dec20b856e2e41ceb4c55761b306c5c3eb8a75ca807e8333420 not found: ID does not exist" Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.155383 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f4cc3a73-b6ad-4c95-a362-a89317f05631-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.155447 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4cc3a73-b6ad-4c95-a362-a89317f05631-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.155460 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9qmvv\" (UniqueName: \"kubernetes.io/projected/f4cc3a73-b6ad-4c95-a362-a89317f05631-kube-api-access-9qmvv\") on node \"crc\" DevicePath \"\"" Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.385717 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dfqw5"] Jan 21 00:59:45 crc kubenswrapper[5118]: I0121 00:59:45.392712 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dfqw5"] Jan 21 00:59:46 crc kubenswrapper[5118]: I0121 00:59:46.985908 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4cc3a73-b6ad-4c95-a362-a89317f05631" path="/var/lib/kubelet/pods/f4cc3a73-b6ad-4c95-a362-a89317f05631/volumes" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.161992 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl"] Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.163955 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f4cc3a73-b6ad-4c95-a362-a89317f05631" containerName="extract-utilities" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.163973 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4cc3a73-b6ad-4c95-a362-a89317f05631" containerName="extract-utilities" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.164001 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="f4cc3a73-b6ad-4c95-a362-a89317f05631" containerName="registry-server" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.164007 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4cc3a73-b6ad-4c95-a362-a89317f05631" containerName="registry-server" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.164025 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f4cc3a73-b6ad-4c95-a362-a89317f05631" containerName="extract-content" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.164031 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4cc3a73-b6ad-4c95-a362-a89317f05631" containerName="extract-content" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.164259 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="f4cc3a73-b6ad-4c95-a362-a89317f05631" containerName="registry-server" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.171933 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482620-tsrgt"] Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.172200 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.174696 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.174911 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.179726 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482620-tsrgt"] Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.179945 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482620-tsrgt" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.181854 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.182176 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.183842 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.187250 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl"] Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.237994 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c47cbc33-f837-499a-ba25-0671ce688b87-secret-volume\") pod \"collect-profiles-29482620-wf5sl\" (UID: \"c47cbc33-f837-499a-ba25-0671ce688b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.238649 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c47cbc33-f837-499a-ba25-0671ce688b87-config-volume\") pod \"collect-profiles-29482620-wf5sl\" (UID: \"c47cbc33-f837-499a-ba25-0671ce688b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.238829 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtmcv\" (UniqueName: 
\"kubernetes.io/projected/c47cbc33-f837-499a-ba25-0671ce688b87-kube-api-access-jtmcv\") pod \"collect-profiles-29482620-wf5sl\" (UID: \"c47cbc33-f837-499a-ba25-0671ce688b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.238949 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvdqv\" (UniqueName: \"kubernetes.io/projected/87220d6d-9864-4c68-ad38-36297e615eaf-kube-api-access-vvdqv\") pod \"auto-csr-approver-29482620-tsrgt\" (UID: \"87220d6d-9864-4c68-ad38-36297e615eaf\") " pod="openshift-infra/auto-csr-approver-29482620-tsrgt" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.341122 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c47cbc33-f837-499a-ba25-0671ce688b87-secret-volume\") pod \"collect-profiles-29482620-wf5sl\" (UID: \"c47cbc33-f837-499a-ba25-0671ce688b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.341502 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c47cbc33-f837-499a-ba25-0671ce688b87-config-volume\") pod \"collect-profiles-29482620-wf5sl\" (UID: \"c47cbc33-f837-499a-ba25-0671ce688b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.341702 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jtmcv\" (UniqueName: \"kubernetes.io/projected/c47cbc33-f837-499a-ba25-0671ce688b87-kube-api-access-jtmcv\") pod \"collect-profiles-29482620-wf5sl\" (UID: \"c47cbc33-f837-499a-ba25-0671ce688b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" Jan 21 01:00:00 crc 
kubenswrapper[5118]: I0121 01:00:00.342562 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vvdqv\" (UniqueName: \"kubernetes.io/projected/87220d6d-9864-4c68-ad38-36297e615eaf-kube-api-access-vvdqv\") pod \"auto-csr-approver-29482620-tsrgt\" (UID: \"87220d6d-9864-4c68-ad38-36297e615eaf\") " pod="openshift-infra/auto-csr-approver-29482620-tsrgt" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.346974 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c47cbc33-f837-499a-ba25-0671ce688b87-config-volume\") pod \"collect-profiles-29482620-wf5sl\" (UID: \"c47cbc33-f837-499a-ba25-0671ce688b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.351498 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c47cbc33-f837-499a-ba25-0671ce688b87-secret-volume\") pod \"collect-profiles-29482620-wf5sl\" (UID: \"c47cbc33-f837-499a-ba25-0671ce688b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.366353 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvdqv\" (UniqueName: \"kubernetes.io/projected/87220d6d-9864-4c68-ad38-36297e615eaf-kube-api-access-vvdqv\") pod \"auto-csr-approver-29482620-tsrgt\" (UID: \"87220d6d-9864-4c68-ad38-36297e615eaf\") " pod="openshift-infra/auto-csr-approver-29482620-tsrgt" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.368043 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtmcv\" (UniqueName: \"kubernetes.io/projected/c47cbc33-f837-499a-ba25-0671ce688b87-kube-api-access-jtmcv\") pod \"collect-profiles-29482620-wf5sl\" (UID: \"c47cbc33-f837-499a-ba25-0671ce688b87\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.500209 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.509669 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482620-tsrgt" Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.951846 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482620-tsrgt"] Jan 21 01:00:00 crc kubenswrapper[5118]: I0121 01:00:00.984596 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl"] Jan 21 01:00:01 crc kubenswrapper[5118]: I0121 01:00:01.218389 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482620-tsrgt" event={"ID":"87220d6d-9864-4c68-ad38-36297e615eaf","Type":"ContainerStarted","Data":"ddc923f2dacb677030ba91d83231af04a492964491f1dae7552b903846a283a5"} Jan 21 01:00:01 crc kubenswrapper[5118]: I0121 01:00:01.223904 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" event={"ID":"c47cbc33-f837-499a-ba25-0671ce688b87","Type":"ContainerStarted","Data":"ff4bfff74b5d3e60c89702e3cd584addd0c9e0e062d3980d18b49d648bb6026e"} Jan 21 01:00:01 crc kubenswrapper[5118]: I0121 01:00:01.224742 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" event={"ID":"c47cbc33-f837-499a-ba25-0671ce688b87","Type":"ContainerStarted","Data":"c9c9c2c1953403e8bc62ae51d5c333f828a2043397dbb1b26149af35b888d5c0"} Jan 21 01:00:01 crc kubenswrapper[5118]: I0121 01:00:01.242309 5118 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" podStartSLOduration=1.242288032 podStartE2EDuration="1.242288032s" podCreationTimestamp="2026-01-21 01:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 01:00:01.240752952 +0000 UTC m=+3056.565000020" watchObservedRunningTime="2026-01-21 01:00:01.242288032 +0000 UTC m=+3056.566535050" Jan 21 01:00:02 crc kubenswrapper[5118]: I0121 01:00:02.233967 5118 generic.go:358] "Generic (PLEG): container finished" podID="c47cbc33-f837-499a-ba25-0671ce688b87" containerID="ff4bfff74b5d3e60c89702e3cd584addd0c9e0e062d3980d18b49d648bb6026e" exitCode=0 Jan 21 01:00:02 crc kubenswrapper[5118]: I0121 01:00:02.234029 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" event={"ID":"c47cbc33-f837-499a-ba25-0671ce688b87","Type":"ContainerDied","Data":"ff4bfff74b5d3e60c89702e3cd584addd0c9e0e062d3980d18b49d648bb6026e"} Jan 21 01:00:03 crc kubenswrapper[5118]: I0121 01:00:03.551563 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" Jan 21 01:00:03 crc kubenswrapper[5118]: I0121 01:00:03.702344 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c47cbc33-f837-499a-ba25-0671ce688b87-secret-volume\") pod \"c47cbc33-f837-499a-ba25-0671ce688b87\" (UID: \"c47cbc33-f837-499a-ba25-0671ce688b87\") " Jan 21 01:00:03 crc kubenswrapper[5118]: I0121 01:00:03.702499 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c47cbc33-f837-499a-ba25-0671ce688b87-config-volume\") pod \"c47cbc33-f837-499a-ba25-0671ce688b87\" (UID: \"c47cbc33-f837-499a-ba25-0671ce688b87\") " Jan 21 01:00:03 crc kubenswrapper[5118]: I0121 01:00:03.702537 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtmcv\" (UniqueName: \"kubernetes.io/projected/c47cbc33-f837-499a-ba25-0671ce688b87-kube-api-access-jtmcv\") pod \"c47cbc33-f837-499a-ba25-0671ce688b87\" (UID: \"c47cbc33-f837-499a-ba25-0671ce688b87\") " Jan 21 01:00:03 crc kubenswrapper[5118]: I0121 01:00:03.704012 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c47cbc33-f837-499a-ba25-0671ce688b87-config-volume" (OuterVolumeSpecName: "config-volume") pod "c47cbc33-f837-499a-ba25-0671ce688b87" (UID: "c47cbc33-f837-499a-ba25-0671ce688b87"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 01:00:03 crc kubenswrapper[5118]: I0121 01:00:03.708143 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47cbc33-f837-499a-ba25-0671ce688b87-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c47cbc33-f837-499a-ba25-0671ce688b87" (UID: "c47cbc33-f837-499a-ba25-0671ce688b87"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 01:00:03 crc kubenswrapper[5118]: I0121 01:00:03.709313 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c47cbc33-f837-499a-ba25-0671ce688b87-kube-api-access-jtmcv" (OuterVolumeSpecName: "kube-api-access-jtmcv") pod "c47cbc33-f837-499a-ba25-0671ce688b87" (UID: "c47cbc33-f837-499a-ba25-0671ce688b87"). InnerVolumeSpecName "kube-api-access-jtmcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 01:00:03 crc kubenswrapper[5118]: I0121 01:00:03.804314 5118 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c47cbc33-f837-499a-ba25-0671ce688b87-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 01:00:03 crc kubenswrapper[5118]: I0121 01:00:03.804346 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jtmcv\" (UniqueName: \"kubernetes.io/projected/c47cbc33-f837-499a-ba25-0671ce688b87-kube-api-access-jtmcv\") on node \"crc\" DevicePath \"\"" Jan 21 01:00:03 crc kubenswrapper[5118]: I0121 01:00:03.804357 5118 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c47cbc33-f837-499a-ba25-0671ce688b87-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 01:00:04 crc kubenswrapper[5118]: I0121 01:00:04.287507 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" event={"ID":"c47cbc33-f837-499a-ba25-0671ce688b87","Type":"ContainerDied","Data":"c9c9c2c1953403e8bc62ae51d5c333f828a2043397dbb1b26149af35b888d5c0"} Jan 21 01:00:04 crc kubenswrapper[5118]: I0121 01:00:04.287560 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9c9c2c1953403e8bc62ae51d5c333f828a2043397dbb1b26149af35b888d5c0" Jan 21 01:00:04 crc kubenswrapper[5118]: I0121 01:00:04.287530 5118 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482620-wf5sl" Jan 21 01:00:04 crc kubenswrapper[5118]: I0121 01:00:04.337040 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh"] Jan 21 01:00:04 crc kubenswrapper[5118]: I0121 01:00:04.341478 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482575-57nmh"] Jan 21 01:00:04 crc kubenswrapper[5118]: I0121 01:00:04.987373 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc" path="/var/lib/kubelet/pods/44ae8c4f-2bd0-4fbb-ba91-007c932ee1bc/volumes" Jan 21 01:00:08 crc kubenswrapper[5118]: I0121 01:00:08.328791 5118 generic.go:358] "Generic (PLEG): container finished" podID="87220d6d-9864-4c68-ad38-36297e615eaf" containerID="eefd4482beaa802a6f72f822bba50fc71b003d3af6c528df379e23aa03640bc1" exitCode=0 Jan 21 01:00:08 crc kubenswrapper[5118]: I0121 01:00:08.328893 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482620-tsrgt" event={"ID":"87220d6d-9864-4c68-ad38-36297e615eaf","Type":"ContainerDied","Data":"eefd4482beaa802a6f72f822bba50fc71b003d3af6c528df379e23aa03640bc1"} Jan 21 01:00:09 crc kubenswrapper[5118]: I0121 01:00:09.267298 5118 scope.go:117] "RemoveContainer" containerID="5b0ced5c5522c08ad128a61d92edadfd696512390ea9a6bedb16395d6bbb4a3d" Jan 21 01:00:09 crc kubenswrapper[5118]: I0121 01:00:09.586246 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482620-tsrgt" Jan 21 01:00:09 crc kubenswrapper[5118]: I0121 01:00:09.599683 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvdqv\" (UniqueName: \"kubernetes.io/projected/87220d6d-9864-4c68-ad38-36297e615eaf-kube-api-access-vvdqv\") pod \"87220d6d-9864-4c68-ad38-36297e615eaf\" (UID: \"87220d6d-9864-4c68-ad38-36297e615eaf\") " Jan 21 01:00:09 crc kubenswrapper[5118]: I0121 01:00:09.608564 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87220d6d-9864-4c68-ad38-36297e615eaf-kube-api-access-vvdqv" (OuterVolumeSpecName: "kube-api-access-vvdqv") pod "87220d6d-9864-4c68-ad38-36297e615eaf" (UID: "87220d6d-9864-4c68-ad38-36297e615eaf"). InnerVolumeSpecName "kube-api-access-vvdqv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 01:00:09 crc kubenswrapper[5118]: I0121 01:00:09.701404 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vvdqv\" (UniqueName: \"kubernetes.io/projected/87220d6d-9864-4c68-ad38-36297e615eaf-kube-api-access-vvdqv\") on node \"crc\" DevicePath \"\"" Jan 21 01:00:10 crc kubenswrapper[5118]: I0121 01:00:10.351437 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482620-tsrgt" event={"ID":"87220d6d-9864-4c68-ad38-36297e615eaf","Type":"ContainerDied","Data":"ddc923f2dacb677030ba91d83231af04a492964491f1dae7552b903846a283a5"} Jan 21 01:00:10 crc kubenswrapper[5118]: I0121 01:00:10.351495 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddc923f2dacb677030ba91d83231af04a492964491f1dae7552b903846a283a5" Jan 21 01:00:10 crc kubenswrapper[5118]: I0121 01:00:10.351492 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482620-tsrgt" Jan 21 01:00:10 crc kubenswrapper[5118]: I0121 01:00:10.662882 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482614-t5wjm"] Jan 21 01:00:10 crc kubenswrapper[5118]: I0121 01:00:10.670306 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482614-t5wjm"] Jan 21 01:00:10 crc kubenswrapper[5118]: I0121 01:00:10.988194 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46899737-0ca5-423e-bf0c-8f057c937e33" path="/var/lib/kubelet/pods/46899737-0ca5-423e-bf0c-8f057c937e33/volumes" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.000358 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pd4rw"] Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.001802 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="87220d6d-9864-4c68-ad38-36297e615eaf" containerName="oc" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.001817 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="87220d6d-9864-4c68-ad38-36297e615eaf" containerName="oc" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.001838 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c47cbc33-f837-499a-ba25-0671ce688b87" containerName="collect-profiles" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.001846 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47cbc33-f837-499a-ba25-0671ce688b87" containerName="collect-profiles" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.002039 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="c47cbc33-f837-499a-ba25-0671ce688b87" containerName="collect-profiles" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.002053 5118 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="87220d6d-9864-4c68-ad38-36297e615eaf" containerName="oc" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.028418 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.034009 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pd4rw"] Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.145127 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rkkl\" (UniqueName: \"kubernetes.io/projected/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-kube-api-access-6rkkl\") pod \"certified-operators-pd4rw\" (UID: \"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29\") " pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.145239 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-utilities\") pod \"certified-operators-pd4rw\" (UID: \"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29\") " pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.145314 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-catalog-content\") pod \"certified-operators-pd4rw\" (UID: \"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29\") " pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.246396 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6rkkl\" (UniqueName: \"kubernetes.io/projected/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-kube-api-access-6rkkl\") pod \"certified-operators-pd4rw\" (UID: 
\"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29\") " pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.246457 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-utilities\") pod \"certified-operators-pd4rw\" (UID: \"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29\") " pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.246515 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-catalog-content\") pod \"certified-operators-pd4rw\" (UID: \"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29\") " pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.246992 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-catalog-content\") pod \"certified-operators-pd4rw\" (UID: \"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29\") " pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.247228 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-utilities\") pod \"certified-operators-pd4rw\" (UID: \"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29\") " pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.272791 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rkkl\" (UniqueName: \"kubernetes.io/projected/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-kube-api-access-6rkkl\") pod \"certified-operators-pd4rw\" (UID: \"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29\") " 
pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.362277 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:18 crc kubenswrapper[5118]: I0121 01:00:18.622316 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pd4rw"] Jan 21 01:00:18 crc kubenswrapper[5118]: W0121 01:00:18.631278 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17c2a6b7_5e22_449b_b77b_0cbf5b95cc29.slice/crio-72d5d57f4cbb2321dbfc14e362d745dcfff202408e887803ee99d0bcd2688f19 WatchSource:0}: Error finding container 72d5d57f4cbb2321dbfc14e362d745dcfff202408e887803ee99d0bcd2688f19: Status 404 returned error can't find the container with id 72d5d57f4cbb2321dbfc14e362d745dcfff202408e887803ee99d0bcd2688f19 Jan 21 01:00:19 crc kubenswrapper[5118]: I0121 01:00:19.470425 5118 generic.go:358] "Generic (PLEG): container finished" podID="17c2a6b7-5e22-449b-b77b-0cbf5b95cc29" containerID="d3ba425f663b41b0d2db3e3b3ca1750167b72452d7028405af728a568b39e7e2" exitCode=0 Jan 21 01:00:19 crc kubenswrapper[5118]: I0121 01:00:19.470927 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pd4rw" event={"ID":"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29","Type":"ContainerDied","Data":"d3ba425f663b41b0d2db3e3b3ca1750167b72452d7028405af728a568b39e7e2"} Jan 21 01:00:19 crc kubenswrapper[5118]: I0121 01:00:19.470984 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pd4rw" event={"ID":"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29","Type":"ContainerStarted","Data":"72d5d57f4cbb2321dbfc14e362d745dcfff202408e887803ee99d0bcd2688f19"} Jan 21 01:00:35 crc kubenswrapper[5118]: I0121 01:00:35.787154 5118 generic.go:358] "Generic (PLEG): container finished" 
podID="17c2a6b7-5e22-449b-b77b-0cbf5b95cc29" containerID="5df9fbf550978d471cbaea4c7a6d620cee4bf970742aaf25e31f275403137024" exitCode=0 Jan 21 01:00:35 crc kubenswrapper[5118]: I0121 01:00:35.787230 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pd4rw" event={"ID":"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29","Type":"ContainerDied","Data":"5df9fbf550978d471cbaea4c7a6d620cee4bf970742aaf25e31f275403137024"} Jan 21 01:00:36 crc kubenswrapper[5118]: I0121 01:00:36.796742 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pd4rw" event={"ID":"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29","Type":"ContainerStarted","Data":"da9ee57df53a8c7bb66465c4b4b9327f5933a7764e20dd44496a35782e85ab2f"} Jan 21 01:00:36 crc kubenswrapper[5118]: I0121 01:00:36.817590 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pd4rw" podStartSLOduration=4.075125558 podStartE2EDuration="19.817571252s" podCreationTimestamp="2026-01-21 01:00:17 +0000 UTC" firstStartedPulling="2026-01-21 01:00:19.473432676 +0000 UTC m=+3074.797679694" lastFinishedPulling="2026-01-21 01:00:35.21587837 +0000 UTC m=+3090.540125388" observedRunningTime="2026-01-21 01:00:36.815270452 +0000 UTC m=+3092.139517470" watchObservedRunningTime="2026-01-21 01:00:36.817571252 +0000 UTC m=+3092.141818270" Jan 21 01:00:38 crc kubenswrapper[5118]: I0121 01:00:38.363802 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:38 crc kubenswrapper[5118]: I0121 01:00:38.364389 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:38 crc kubenswrapper[5118]: I0121 01:00:38.422939 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pd4rw" Jan 
21 01:00:49 crc kubenswrapper[5118]: I0121 01:00:49.899075 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:49 crc kubenswrapper[5118]: I0121 01:00:49.969189 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pd4rw"] Jan 21 01:00:49 crc kubenswrapper[5118]: I0121 01:00:49.969663 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pd4rw" podUID="17c2a6b7-5e22-449b-b77b-0cbf5b95cc29" containerName="registry-server" containerID="cri-o://da9ee57df53a8c7bb66465c4b4b9327f5933a7764e20dd44496a35782e85ab2f" gracePeriod=2 Jan 21 01:00:50 crc kubenswrapper[5118]: I0121 01:00:50.357925 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:50 crc kubenswrapper[5118]: I0121 01:00:50.457313 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-utilities\") pod \"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29\" (UID: \"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29\") " Jan 21 01:00:50 crc kubenswrapper[5118]: I0121 01:00:50.457487 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-catalog-content\") pod \"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29\" (UID: \"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29\") " Jan 21 01:00:50 crc kubenswrapper[5118]: I0121 01:00:50.457599 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rkkl\" (UniqueName: \"kubernetes.io/projected/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-kube-api-access-6rkkl\") pod \"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29\" (UID: \"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29\") " 
Jan 21 01:00:50 crc kubenswrapper[5118]: I0121 01:00:50.458412 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-utilities" (OuterVolumeSpecName: "utilities") pod "17c2a6b7-5e22-449b-b77b-0cbf5b95cc29" (UID: "17c2a6b7-5e22-449b-b77b-0cbf5b95cc29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 01:00:50 crc kubenswrapper[5118]: I0121 01:00:50.464754 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-kube-api-access-6rkkl" (OuterVolumeSpecName: "kube-api-access-6rkkl") pod "17c2a6b7-5e22-449b-b77b-0cbf5b95cc29" (UID: "17c2a6b7-5e22-449b-b77b-0cbf5b95cc29"). InnerVolumeSpecName "kube-api-access-6rkkl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 01:00:50 crc kubenswrapper[5118]: I0121 01:00:50.497539 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17c2a6b7-5e22-449b-b77b-0cbf5b95cc29" (UID: "17c2a6b7-5e22-449b-b77b-0cbf5b95cc29"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 01:00:50 crc kubenswrapper[5118]: I0121 01:00:50.559763 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rkkl\" (UniqueName: \"kubernetes.io/projected/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-kube-api-access-6rkkl\") on node \"crc\" DevicePath \"\"" Jan 21 01:00:50 crc kubenswrapper[5118]: I0121 01:00:50.559807 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 01:00:50 crc kubenswrapper[5118]: I0121 01:00:50.559820 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 01:00:50 crc kubenswrapper[5118]: I0121 01:00:50.944818 5118 generic.go:358] "Generic (PLEG): container finished" podID="17c2a6b7-5e22-449b-b77b-0cbf5b95cc29" containerID="da9ee57df53a8c7bb66465c4b4b9327f5933a7764e20dd44496a35782e85ab2f" exitCode=0 Jan 21 01:00:50 crc kubenswrapper[5118]: I0121 01:00:50.945034 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pd4rw" Jan 21 01:00:50 crc kubenswrapper[5118]: I0121 01:00:50.945035 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pd4rw" event={"ID":"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29","Type":"ContainerDied","Data":"da9ee57df53a8c7bb66465c4b4b9327f5933a7764e20dd44496a35782e85ab2f"} Jan 21 01:00:50 crc kubenswrapper[5118]: I0121 01:00:50.945255 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pd4rw" event={"ID":"17c2a6b7-5e22-449b-b77b-0cbf5b95cc29","Type":"ContainerDied","Data":"72d5d57f4cbb2321dbfc14e362d745dcfff202408e887803ee99d0bcd2688f19"} Jan 21 01:00:50 crc kubenswrapper[5118]: I0121 01:00:50.945366 5118 scope.go:117] "RemoveContainer" containerID="da9ee57df53a8c7bb66465c4b4b9327f5933a7764e20dd44496a35782e85ab2f" Jan 21 01:00:51 crc kubenswrapper[5118]: I0121 01:00:50.999580 5118 scope.go:117] "RemoveContainer" containerID="5df9fbf550978d471cbaea4c7a6d620cee4bf970742aaf25e31f275403137024" Jan 21 01:00:51 crc kubenswrapper[5118]: I0121 01:00:50.999774 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pd4rw"] Jan 21 01:00:51 crc kubenswrapper[5118]: I0121 01:00:51.010471 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pd4rw"] Jan 21 01:00:51 crc kubenswrapper[5118]: I0121 01:00:51.039202 5118 scope.go:117] "RemoveContainer" containerID="d3ba425f663b41b0d2db3e3b3ca1750167b72452d7028405af728a568b39e7e2" Jan 21 01:00:51 crc kubenswrapper[5118]: I0121 01:00:51.066019 5118 scope.go:117] "RemoveContainer" containerID="da9ee57df53a8c7bb66465c4b4b9327f5933a7764e20dd44496a35782e85ab2f" Jan 21 01:00:51 crc kubenswrapper[5118]: E0121 01:00:51.066416 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"da9ee57df53a8c7bb66465c4b4b9327f5933a7764e20dd44496a35782e85ab2f\": container with ID starting with da9ee57df53a8c7bb66465c4b4b9327f5933a7764e20dd44496a35782e85ab2f not found: ID does not exist" containerID="da9ee57df53a8c7bb66465c4b4b9327f5933a7764e20dd44496a35782e85ab2f" Jan 21 01:00:51 crc kubenswrapper[5118]: I0121 01:00:51.066495 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da9ee57df53a8c7bb66465c4b4b9327f5933a7764e20dd44496a35782e85ab2f"} err="failed to get container status \"da9ee57df53a8c7bb66465c4b4b9327f5933a7764e20dd44496a35782e85ab2f\": rpc error: code = NotFound desc = could not find container \"da9ee57df53a8c7bb66465c4b4b9327f5933a7764e20dd44496a35782e85ab2f\": container with ID starting with da9ee57df53a8c7bb66465c4b4b9327f5933a7764e20dd44496a35782e85ab2f not found: ID does not exist" Jan 21 01:00:51 crc kubenswrapper[5118]: I0121 01:00:51.066556 5118 scope.go:117] "RemoveContainer" containerID="5df9fbf550978d471cbaea4c7a6d620cee4bf970742aaf25e31f275403137024" Jan 21 01:00:51 crc kubenswrapper[5118]: E0121 01:00:51.067394 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5df9fbf550978d471cbaea4c7a6d620cee4bf970742aaf25e31f275403137024\": container with ID starting with 5df9fbf550978d471cbaea4c7a6d620cee4bf970742aaf25e31f275403137024 not found: ID does not exist" containerID="5df9fbf550978d471cbaea4c7a6d620cee4bf970742aaf25e31f275403137024" Jan 21 01:00:51 crc kubenswrapper[5118]: I0121 01:00:51.067428 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df9fbf550978d471cbaea4c7a6d620cee4bf970742aaf25e31f275403137024"} err="failed to get container status \"5df9fbf550978d471cbaea4c7a6d620cee4bf970742aaf25e31f275403137024\": rpc error: code = NotFound desc = could not find container \"5df9fbf550978d471cbaea4c7a6d620cee4bf970742aaf25e31f275403137024\": container with ID 
starting with 5df9fbf550978d471cbaea4c7a6d620cee4bf970742aaf25e31f275403137024 not found: ID does not exist" Jan 21 01:00:51 crc kubenswrapper[5118]: I0121 01:00:51.067450 5118 scope.go:117] "RemoveContainer" containerID="d3ba425f663b41b0d2db3e3b3ca1750167b72452d7028405af728a568b39e7e2" Jan 21 01:00:51 crc kubenswrapper[5118]: E0121 01:00:51.067700 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ba425f663b41b0d2db3e3b3ca1750167b72452d7028405af728a568b39e7e2\": container with ID starting with d3ba425f663b41b0d2db3e3b3ca1750167b72452d7028405af728a568b39e7e2 not found: ID does not exist" containerID="d3ba425f663b41b0d2db3e3b3ca1750167b72452d7028405af728a568b39e7e2" Jan 21 01:00:51 crc kubenswrapper[5118]: I0121 01:00:51.067808 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ba425f663b41b0d2db3e3b3ca1750167b72452d7028405af728a568b39e7e2"} err="failed to get container status \"d3ba425f663b41b0d2db3e3b3ca1750167b72452d7028405af728a568b39e7e2\": rpc error: code = NotFound desc = could not find container \"d3ba425f663b41b0d2db3e3b3ca1750167b72452d7028405af728a568b39e7e2\": container with ID starting with d3ba425f663b41b0d2db3e3b3ca1750167b72452d7028405af728a568b39e7e2 not found: ID does not exist" Jan 21 01:00:52 crc kubenswrapper[5118]: I0121 01:00:52.989304 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17c2a6b7-5e22-449b-b77b-0cbf5b95cc29" path="/var/lib/kubelet/pods/17c2a6b7-5e22-449b-b77b-0cbf5b95cc29/volumes" Jan 21 01:01:09 crc kubenswrapper[5118]: I0121 01:01:09.338724 5118 scope.go:117] "RemoveContainer" containerID="d4787b6abdc6b03481491be9acb58eceab56d36eae34995a3360cc206aeff10a" Jan 21 01:01:33 crc kubenswrapper[5118]: I0121 01:01:33.801635 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 01:01:33 crc kubenswrapper[5118]: I0121 01:01:33.802415 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.184253 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482622-pxxb6"] Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.185398 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17c2a6b7-5e22-449b-b77b-0cbf5b95cc29" containerName="extract-content" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.185410 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c2a6b7-5e22-449b-b77b-0cbf5b95cc29" containerName="extract-content" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.185438 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17c2a6b7-5e22-449b-b77b-0cbf5b95cc29" containerName="extract-utilities" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.185443 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c2a6b7-5e22-449b-b77b-0cbf5b95cc29" containerName="extract-utilities" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.185452 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17c2a6b7-5e22-449b-b77b-0cbf5b95cc29" containerName="registry-server" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.185457 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c2a6b7-5e22-449b-b77b-0cbf5b95cc29" containerName="registry-server" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.185562 5118 
memory_manager.go:356] "RemoveStaleState removing state" podUID="17c2a6b7-5e22-449b-b77b-0cbf5b95cc29" containerName="registry-server" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.194947 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482622-pxxb6" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.196654 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482622-pxxb6"] Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.198019 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.198348 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.199642 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.281207 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr48x\" (UniqueName: \"kubernetes.io/projected/c7245025-2fca-484c-b55a-431b33de7097-kube-api-access-pr48x\") pod \"auto-csr-approver-29482622-pxxb6\" (UID: \"c7245025-2fca-484c-b55a-431b33de7097\") " pod="openshift-infra/auto-csr-approver-29482622-pxxb6" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.382345 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pr48x\" (UniqueName: \"kubernetes.io/projected/c7245025-2fca-484c-b55a-431b33de7097-kube-api-access-pr48x\") pod \"auto-csr-approver-29482622-pxxb6\" (UID: \"c7245025-2fca-484c-b55a-431b33de7097\") " pod="openshift-infra/auto-csr-approver-29482622-pxxb6" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.407477 
5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr48x\" (UniqueName: \"kubernetes.io/projected/c7245025-2fca-484c-b55a-431b33de7097-kube-api-access-pr48x\") pod \"auto-csr-approver-29482622-pxxb6\" (UID: \"c7245025-2fca-484c-b55a-431b33de7097\") " pod="openshift-infra/auto-csr-approver-29482622-pxxb6" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.550663 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482622-pxxb6" Jan 21 01:02:00 crc kubenswrapper[5118]: I0121 01:02:00.771612 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482622-pxxb6"] Jan 21 01:02:01 crc kubenswrapper[5118]: I0121 01:02:01.771638 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482622-pxxb6" event={"ID":"c7245025-2fca-484c-b55a-431b33de7097","Type":"ContainerStarted","Data":"c8cb675892590c847a10ab948130d2ae34ebbbbc5a368c82db612e8ca2e90e5f"} Jan 21 01:02:03 crc kubenswrapper[5118]: I0121 01:02:03.793468 5118 generic.go:358] "Generic (PLEG): container finished" podID="c7245025-2fca-484c-b55a-431b33de7097" containerID="4539aac95230688625692e24566004005ca760e195c3821083cbeab5423afc15" exitCode=0 Jan 21 01:02:03 crc kubenswrapper[5118]: I0121 01:02:03.793658 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482622-pxxb6" event={"ID":"c7245025-2fca-484c-b55a-431b33de7097","Type":"ContainerDied","Data":"4539aac95230688625692e24566004005ca760e195c3821083cbeab5423afc15"} Jan 21 01:02:03 crc kubenswrapper[5118]: I0121 01:02:03.801512 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 01:02:03 crc 
kubenswrapper[5118]: I0121 01:02:03.801586 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 01:02:05 crc kubenswrapper[5118]: I0121 01:02:05.174866 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482622-pxxb6" Jan 21 01:02:05 crc kubenswrapper[5118]: I0121 01:02:05.263480 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr48x\" (UniqueName: \"kubernetes.io/projected/c7245025-2fca-484c-b55a-431b33de7097-kube-api-access-pr48x\") pod \"c7245025-2fca-484c-b55a-431b33de7097\" (UID: \"c7245025-2fca-484c-b55a-431b33de7097\") " Jan 21 01:02:05 crc kubenswrapper[5118]: I0121 01:02:05.269981 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7245025-2fca-484c-b55a-431b33de7097-kube-api-access-pr48x" (OuterVolumeSpecName: "kube-api-access-pr48x") pod "c7245025-2fca-484c-b55a-431b33de7097" (UID: "c7245025-2fca-484c-b55a-431b33de7097"). InnerVolumeSpecName "kube-api-access-pr48x". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 01:02:05 crc kubenswrapper[5118]: I0121 01:02:05.365225 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pr48x\" (UniqueName: \"kubernetes.io/projected/c7245025-2fca-484c-b55a-431b33de7097-kube-api-access-pr48x\") on node \"crc\" DevicePath \"\"" Jan 21 01:02:05 crc kubenswrapper[5118]: I0121 01:02:05.812424 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482622-pxxb6" event={"ID":"c7245025-2fca-484c-b55a-431b33de7097","Type":"ContainerDied","Data":"c8cb675892590c847a10ab948130d2ae34ebbbbc5a368c82db612e8ca2e90e5f"} Jan 21 01:02:05 crc kubenswrapper[5118]: I0121 01:02:05.812475 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8cb675892590c847a10ab948130d2ae34ebbbbc5a368c82db612e8ca2e90e5f" Jan 21 01:02:05 crc kubenswrapper[5118]: I0121 01:02:05.812550 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482622-pxxb6" Jan 21 01:02:06 crc kubenswrapper[5118]: I0121 01:02:06.255675 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482616-qvvct"] Jan 21 01:02:06 crc kubenswrapper[5118]: I0121 01:02:06.261867 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482616-qvvct"] Jan 21 01:02:06 crc kubenswrapper[5118]: I0121 01:02:06.989825 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60b8e072-9e63-4258-a607-54ff40b13e88" path="/var/lib/kubelet/pods/60b8e072-9e63-4258-a607-54ff40b13e88/volumes" Jan 21 01:02:09 crc kubenswrapper[5118]: I0121 01:02:09.463966 5118 scope.go:117] "RemoveContainer" containerID="9d37d3004789fd98b02027ef8b340526355526ee7c76aa0be8730d4941557340" Jan 21 01:02:33 crc kubenswrapper[5118]: I0121 01:02:33.801545 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 01:02:33 crc kubenswrapper[5118]: I0121 01:02:33.802285 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 01:02:33 crc kubenswrapper[5118]: I0121 01:02:33.802355 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 01:02:33 crc kubenswrapper[5118]: I0121 01:02:33.803113 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3"} pod="openshift-machine-config-operator/machine-config-daemon-22r9n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 01:02:33 crc kubenswrapper[5118]: I0121 01:02:33.803247 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" containerID="cri-o://1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" gracePeriod=600 Jan 21 01:02:33 crc kubenswrapper[5118]: E0121 01:02:33.939016 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 01:02:34 crc kubenswrapper[5118]: I0121 01:02:34.072687 5118 generic.go:358] "Generic (PLEG): container finished" podID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" exitCode=0 Jan 21 01:02:34 crc kubenswrapper[5118]: I0121 01:02:34.072888 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerDied","Data":"1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3"} Jan 21 01:02:34 crc kubenswrapper[5118]: I0121 01:02:34.072927 5118 scope.go:117] "RemoveContainer" containerID="65522b58c4e66c6b9d619159bfd79bf25c3edfc786d498146b82854125a4f813" Jan 21 01:02:34 crc kubenswrapper[5118]: I0121 01:02:34.073457 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:02:34 crc kubenswrapper[5118]: E0121 01:02:34.073721 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 01:02:46 crc kubenswrapper[5118]: I0121 01:02:46.975696 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:02:46 crc kubenswrapper[5118]: E0121 01:02:46.976650 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 01:03:01 crc kubenswrapper[5118]: I0121 01:03:01.975750 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:03:01 crc kubenswrapper[5118]: E0121 01:03:01.976584 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 01:03:13 crc kubenswrapper[5118]: I0121 01:03:13.976243 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:03:13 crc kubenswrapper[5118]: E0121 01:03:13.977270 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 01:03:24 crc kubenswrapper[5118]: I0121 01:03:24.984517 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:03:24 crc kubenswrapper[5118]: E0121 01:03:24.985416 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 01:03:37 crc kubenswrapper[5118]: I0121 01:03:37.975771 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:03:37 crc kubenswrapper[5118]: E0121 01:03:37.977499 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 01:03:49 crc kubenswrapper[5118]: I0121 01:03:49.978205 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:03:49 crc kubenswrapper[5118]: E0121 01:03:49.983308 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 01:04:00 crc kubenswrapper[5118]: I0121 01:04:00.131821 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482624-cr6x2"] Jan 21 01:04:00 crc kubenswrapper[5118]: I0121 01:04:00.134140 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="c7245025-2fca-484c-b55a-431b33de7097" containerName="oc"
Jan 21 01:04:00 crc kubenswrapper[5118]: I0121 01:04:00.134180 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7245025-2fca-484c-b55a-431b33de7097" containerName="oc"
Jan 21 01:04:00 crc kubenswrapper[5118]: I0121 01:04:00.134368 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="c7245025-2fca-484c-b55a-431b33de7097" containerName="oc"
Jan 21 01:04:00 crc kubenswrapper[5118]: I0121 01:04:00.143459 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482624-cr6x2"]
Jan 21 01:04:00 crc kubenswrapper[5118]: I0121 01:04:00.143560 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482624-cr6x2"
Jan 21 01:04:00 crc kubenswrapper[5118]: I0121 01:04:00.146994 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\""
Jan 21 01:04:00 crc kubenswrapper[5118]: I0121 01:04:00.148110 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 01:04:00 crc kubenswrapper[5118]: I0121 01:04:00.149571 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 01:04:00 crc kubenswrapper[5118]: I0121 01:04:00.237359 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pxn2\" (UniqueName: \"kubernetes.io/projected/e1ad47f3-ea14-474a-b12a-7c357dafacad-kube-api-access-5pxn2\") pod \"auto-csr-approver-29482624-cr6x2\" (UID: \"e1ad47f3-ea14-474a-b12a-7c357dafacad\") " pod="openshift-infra/auto-csr-approver-29482624-cr6x2"
Jan 21 01:04:00 crc kubenswrapper[5118]: I0121 01:04:00.339050 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5pxn2\" (UniqueName: \"kubernetes.io/projected/e1ad47f3-ea14-474a-b12a-7c357dafacad-kube-api-access-5pxn2\") pod \"auto-csr-approver-29482624-cr6x2\" (UID: \"e1ad47f3-ea14-474a-b12a-7c357dafacad\") " pod="openshift-infra/auto-csr-approver-29482624-cr6x2"
Jan 21 01:04:00 crc kubenswrapper[5118]: I0121 01:04:00.365043 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pxn2\" (UniqueName: \"kubernetes.io/projected/e1ad47f3-ea14-474a-b12a-7c357dafacad-kube-api-access-5pxn2\") pod \"auto-csr-approver-29482624-cr6x2\" (UID: \"e1ad47f3-ea14-474a-b12a-7c357dafacad\") " pod="openshift-infra/auto-csr-approver-29482624-cr6x2"
Jan 21 01:04:00 crc kubenswrapper[5118]: I0121 01:04:00.471488 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482624-cr6x2"
Jan 21 01:04:00 crc kubenswrapper[5118]: I0121 01:04:00.771865 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482624-cr6x2"]
Jan 21 01:04:00 crc kubenswrapper[5118]: W0121 01:04:00.781034 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1ad47f3_ea14_474a_b12a_7c357dafacad.slice/crio-0d3418fcc790138e82eed7224b84f2750c9f7b41bbf64e769e02b6af465bd89a WatchSource:0}: Error finding container 0d3418fcc790138e82eed7224b84f2750c9f7b41bbf64e769e02b6af465bd89a: Status 404 returned error can't find the container with id 0d3418fcc790138e82eed7224b84f2750c9f7b41bbf64e769e02b6af465bd89a
Jan 21 01:04:00 crc kubenswrapper[5118]: I0121 01:04:00.875116 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482624-cr6x2" event={"ID":"e1ad47f3-ea14-474a-b12a-7c357dafacad","Type":"ContainerStarted","Data":"0d3418fcc790138e82eed7224b84f2750c9f7b41bbf64e769e02b6af465bd89a"}
Jan 21 01:04:02 crc kubenswrapper[5118]: I0121 01:04:02.895752 5118 generic.go:358] 
"Generic (PLEG): container finished" podID="e1ad47f3-ea14-474a-b12a-7c357dafacad" containerID="41ac96f417cb3400f7d6fb5248587b5aeccf264c26917398399141ef69340d56" exitCode=0
Jan 21 01:04:02 crc kubenswrapper[5118]: I0121 01:04:02.895851 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482624-cr6x2" event={"ID":"e1ad47f3-ea14-474a-b12a-7c357dafacad","Type":"ContainerDied","Data":"41ac96f417cb3400f7d6fb5248587b5aeccf264c26917398399141ef69340d56"}
Jan 21 01:04:03 crc kubenswrapper[5118]: I0121 01:04:03.976089 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3"
Jan 21 01:04:03 crc kubenswrapper[5118]: E0121 01:04:03.977083 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 01:04:04 crc kubenswrapper[5118]: I0121 01:04:04.252344 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482624-cr6x2"
Jan 21 01:04:04 crc kubenswrapper[5118]: I0121 01:04:04.419964 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pxn2\" (UniqueName: \"kubernetes.io/projected/e1ad47f3-ea14-474a-b12a-7c357dafacad-kube-api-access-5pxn2\") pod \"e1ad47f3-ea14-474a-b12a-7c357dafacad\" (UID: \"e1ad47f3-ea14-474a-b12a-7c357dafacad\") "
Jan 21 01:04:04 crc kubenswrapper[5118]: I0121 01:04:04.428477 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1ad47f3-ea14-474a-b12a-7c357dafacad-kube-api-access-5pxn2" (OuterVolumeSpecName: "kube-api-access-5pxn2") pod "e1ad47f3-ea14-474a-b12a-7c357dafacad" (UID: "e1ad47f3-ea14-474a-b12a-7c357dafacad"). InnerVolumeSpecName "kube-api-access-5pxn2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 01:04:04 crc kubenswrapper[5118]: I0121 01:04:04.525411 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5pxn2\" (UniqueName: \"kubernetes.io/projected/e1ad47f3-ea14-474a-b12a-7c357dafacad-kube-api-access-5pxn2\") on node \"crc\" DevicePath \"\""
Jan 21 01:04:04 crc kubenswrapper[5118]: I0121 01:04:04.917273 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482624-cr6x2" event={"ID":"e1ad47f3-ea14-474a-b12a-7c357dafacad","Type":"ContainerDied","Data":"0d3418fcc790138e82eed7224b84f2750c9f7b41bbf64e769e02b6af465bd89a"}
Jan 21 01:04:04 crc kubenswrapper[5118]: I0121 01:04:04.917328 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d3418fcc790138e82eed7224b84f2750c9f7b41bbf64e769e02b6af465bd89a"
Jan 21 01:04:04 crc kubenswrapper[5118]: I0121 01:04:04.917286 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482624-cr6x2"
Jan 21 01:04:05 crc kubenswrapper[5118]: I0121 01:04:05.318477 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482618-95pkp"]
Jan 21 01:04:05 crc kubenswrapper[5118]: I0121 01:04:05.328865 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482618-95pkp"]
Jan 21 01:04:06 crc kubenswrapper[5118]: I0121 01:04:06.319433 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log"
Jan 21 01:04:06 crc kubenswrapper[5118]: I0121 01:04:06.340627 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 01:04:06 crc kubenswrapper[5118]: I0121 01:04:06.345535 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log"
Jan 21 01:04:06 crc kubenswrapper[5118]: I0121 01:04:06.358826 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 01:04:07 crc kubenswrapper[5118]: I0121 01:04:07.005332 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4f580dc-7513-4504-bf79-07c609dce4a0" path="/var/lib/kubelet/pods/e4f580dc-7513-4504-bf79-07c609dce4a0/volumes"
Jan 21 01:04:09 crc kubenswrapper[5118]: I0121 01:04:09.615447 5118 scope.go:117] "RemoveContainer" containerID="6045a42860a3d1100bc1d512659d3e12bb0358f846fb34fcb56ed1a5147f86f4"
Jan 21 01:04:18 crc kubenswrapper[5118]: I0121 01:04:18.979791 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3"
Jan 21 01:04:18 crc 
kubenswrapper[5118]: E0121 01:04:18.983116 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 01:04:31 crc kubenswrapper[5118]: I0121 01:04:31.976475 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3"
Jan 21 01:04:31 crc kubenswrapper[5118]: E0121 01:04:31.977372 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 01:04:44 crc kubenswrapper[5118]: I0121 01:04:44.985269 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3"
Jan 21 01:04:44 crc kubenswrapper[5118]: E0121 01:04:44.986074 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 01:04:58 crc kubenswrapper[5118]: I0121 01:04:58.975894 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3"
Jan 
21 01:04:58 crc kubenswrapper[5118]: E0121 01:04:58.978068 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 01:05:12 crc kubenswrapper[5118]: I0121 01:05:12.977718 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3"
Jan 21 01:05:12 crc kubenswrapper[5118]: E0121 01:05:12.979119 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 01:05:27 crc kubenswrapper[5118]: I0121 01:05:27.976858 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3"
Jan 21 01:05:27 crc kubenswrapper[5118]: E0121 01:05:27.978033 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 01:05:38 crc kubenswrapper[5118]: I0121 01:05:38.976321 5118 scope.go:117] "RemoveContainer" 
containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3"
Jan 21 01:05:38 crc kubenswrapper[5118]: E0121 01:05:38.977386 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 01:05:51 crc kubenswrapper[5118]: I0121 01:05:51.935104 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t5h9b"]
Jan 21 01:05:51 crc kubenswrapper[5118]: I0121 01:05:51.936330 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e1ad47f3-ea14-474a-b12a-7c357dafacad" containerName="oc"
Jan 21 01:05:51 crc kubenswrapper[5118]: I0121 01:05:51.936346 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1ad47f3-ea14-474a-b12a-7c357dafacad" containerName="oc"
Jan 21 01:05:51 crc kubenswrapper[5118]: I0121 01:05:51.936497 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="e1ad47f3-ea14-474a-b12a-7c357dafacad" containerName="oc"
Jan 21 01:05:51 crc kubenswrapper[5118]: I0121 01:05:51.947580 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:05:51 crc kubenswrapper[5118]: I0121 01:05:51.955273 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5h9b"]
Jan 21 01:05:52 crc kubenswrapper[5118]: I0121 01:05:52.000467 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98930fec-2193-4425-94b3-5ec249907ebd-utilities\") pod \"redhat-operators-t5h9b\" (UID: \"98930fec-2193-4425-94b3-5ec249907ebd\") " pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:05:52 crc kubenswrapper[5118]: I0121 01:05:52.000556 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98930fec-2193-4425-94b3-5ec249907ebd-catalog-content\") pod \"redhat-operators-t5h9b\" (UID: \"98930fec-2193-4425-94b3-5ec249907ebd\") " pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:05:52 crc kubenswrapper[5118]: I0121 01:05:52.001186 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdjm5\" (UniqueName: \"kubernetes.io/projected/98930fec-2193-4425-94b3-5ec249907ebd-kube-api-access-fdjm5\") pod \"redhat-operators-t5h9b\" (UID: \"98930fec-2193-4425-94b3-5ec249907ebd\") " pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:05:52 crc kubenswrapper[5118]: I0121 01:05:52.102999 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98930fec-2193-4425-94b3-5ec249907ebd-utilities\") pod \"redhat-operators-t5h9b\" (UID: \"98930fec-2193-4425-94b3-5ec249907ebd\") " pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:05:52 crc kubenswrapper[5118]: I0121 01:05:52.103108 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98930fec-2193-4425-94b3-5ec249907ebd-catalog-content\") pod \"redhat-operators-t5h9b\" (UID: \"98930fec-2193-4425-94b3-5ec249907ebd\") " pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:05:52 crc kubenswrapper[5118]: I0121 01:05:52.103275 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fdjm5\" (UniqueName: \"kubernetes.io/projected/98930fec-2193-4425-94b3-5ec249907ebd-kube-api-access-fdjm5\") pod \"redhat-operators-t5h9b\" (UID: \"98930fec-2193-4425-94b3-5ec249907ebd\") " pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:05:52 crc kubenswrapper[5118]: I0121 01:05:52.104582 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98930fec-2193-4425-94b3-5ec249907ebd-utilities\") pod \"redhat-operators-t5h9b\" (UID: \"98930fec-2193-4425-94b3-5ec249907ebd\") " pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:05:52 crc kubenswrapper[5118]: I0121 01:05:52.105567 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98930fec-2193-4425-94b3-5ec249907ebd-catalog-content\") pod \"redhat-operators-t5h9b\" (UID: \"98930fec-2193-4425-94b3-5ec249907ebd\") " pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:05:52 crc kubenswrapper[5118]: I0121 01:05:52.124818 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdjm5\" (UniqueName: \"kubernetes.io/projected/98930fec-2193-4425-94b3-5ec249907ebd-kube-api-access-fdjm5\") pod \"redhat-operators-t5h9b\" (UID: \"98930fec-2193-4425-94b3-5ec249907ebd\") " pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:05:52 crc kubenswrapper[5118]: I0121 01:05:52.281522 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:05:52 crc kubenswrapper[5118]: W0121 01:05:52.753592 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98930fec_2193_4425_94b3_5ec249907ebd.slice/crio-99abbd34c1e1eb8099e02249ece401fabb5b4170d5fd02eef536a807db85f27b WatchSource:0}: Error finding container 99abbd34c1e1eb8099e02249ece401fabb5b4170d5fd02eef536a807db85f27b: Status 404 returned error can't find the container with id 99abbd34c1e1eb8099e02249ece401fabb5b4170d5fd02eef536a807db85f27b
Jan 21 01:05:52 crc kubenswrapper[5118]: I0121 01:05:52.756675 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 01:05:52 crc kubenswrapper[5118]: I0121 01:05:52.766879 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5h9b"]
Jan 21 01:05:52 crc kubenswrapper[5118]: I0121 01:05:52.851538 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h9b" event={"ID":"98930fec-2193-4425-94b3-5ec249907ebd","Type":"ContainerStarted","Data":"99abbd34c1e1eb8099e02249ece401fabb5b4170d5fd02eef536a807db85f27b"}
Jan 21 01:05:52 crc kubenswrapper[5118]: I0121 01:05:52.976814 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3"
Jan 21 01:05:52 crc kubenswrapper[5118]: E0121 01:05:52.977358 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 01:05:53 crc kubenswrapper[5118]: I0121 
01:05:53.862546 5118 generic.go:358] "Generic (PLEG): container finished" podID="98930fec-2193-4425-94b3-5ec249907ebd" containerID="b3a3ff7c3e29e18f64434fea92c5abab7ac7b128fb811de8ed30af8bd0d66f94" exitCode=0
Jan 21 01:05:53 crc kubenswrapper[5118]: I0121 01:05:53.862739 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h9b" event={"ID":"98930fec-2193-4425-94b3-5ec249907ebd","Type":"ContainerDied","Data":"b3a3ff7c3e29e18f64434fea92c5abab7ac7b128fb811de8ed30af8bd0d66f94"}
Jan 21 01:05:55 crc kubenswrapper[5118]: I0121 01:05:55.884679 5118 generic.go:358] "Generic (PLEG): container finished" podID="98930fec-2193-4425-94b3-5ec249907ebd" containerID="d6be483be93aa37017ef58608780e6b87ca5142029ca7a849c47d0ee3e3130ce" exitCode=0
Jan 21 01:05:55 crc kubenswrapper[5118]: I0121 01:05:55.884780 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h9b" event={"ID":"98930fec-2193-4425-94b3-5ec249907ebd","Type":"ContainerDied","Data":"d6be483be93aa37017ef58608780e6b87ca5142029ca7a849c47d0ee3e3130ce"}
Jan 21 01:05:56 crc kubenswrapper[5118]: I0121 01:05:56.894832 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h9b" event={"ID":"98930fec-2193-4425-94b3-5ec249907ebd","Type":"ContainerStarted","Data":"b7b7db639586f8ded09363504e005ab0b3d523ed067312b74d891e5dd12493a3"}
Jan 21 01:05:56 crc kubenswrapper[5118]: I0121 01:05:56.925891 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t5h9b" podStartSLOduration=4.857494818 podStartE2EDuration="5.925863121s" podCreationTimestamp="2026-01-21 01:05:51 +0000 UTC" firstStartedPulling="2026-01-21 01:05:53.86385563 +0000 UTC m=+3409.188102658" lastFinishedPulling="2026-01-21 01:05:54.932223933 +0000 UTC m=+3410.256470961" observedRunningTime="2026-01-21 01:05:56.916129852 +0000 UTC m=+3412.240376900" 
watchObservedRunningTime="2026-01-21 01:05:56.925863121 +0000 UTC m=+3412.250110149"
Jan 21 01:06:00 crc kubenswrapper[5118]: I0121 01:06:00.153042 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482626-hj4f8"]
Jan 21 01:06:00 crc kubenswrapper[5118]: I0121 01:06:00.177109 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482626-hj4f8"]
Jan 21 01:06:00 crc kubenswrapper[5118]: I0121 01:06:00.177300 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482626-hj4f8"
Jan 21 01:06:00 crc kubenswrapper[5118]: I0121 01:06:00.185668 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 01:06:00 crc kubenswrapper[5118]: I0121 01:06:00.186229 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 01:06:00 crc kubenswrapper[5118]: I0121 01:06:00.187291 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\""
Jan 21 01:06:00 crc kubenswrapper[5118]: I0121 01:06:00.329209 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjtm6\" (UniqueName: \"kubernetes.io/projected/823533f9-8a12-4911-bd76-f562834ef9a4-kube-api-access-zjtm6\") pod \"auto-csr-approver-29482626-hj4f8\" (UID: \"823533f9-8a12-4911-bd76-f562834ef9a4\") " pod="openshift-infra/auto-csr-approver-29482626-hj4f8"
Jan 21 01:06:00 crc kubenswrapper[5118]: I0121 01:06:00.431326 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zjtm6\" (UniqueName: \"kubernetes.io/projected/823533f9-8a12-4911-bd76-f562834ef9a4-kube-api-access-zjtm6\") pod \"auto-csr-approver-29482626-hj4f8\" (UID: \"823533f9-8a12-4911-bd76-f562834ef9a4\") " 
pod="openshift-infra/auto-csr-approver-29482626-hj4f8"
Jan 21 01:06:00 crc kubenswrapper[5118]: I0121 01:06:00.462356 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjtm6\" (UniqueName: \"kubernetes.io/projected/823533f9-8a12-4911-bd76-f562834ef9a4-kube-api-access-zjtm6\") pod \"auto-csr-approver-29482626-hj4f8\" (UID: \"823533f9-8a12-4911-bd76-f562834ef9a4\") " pod="openshift-infra/auto-csr-approver-29482626-hj4f8"
Jan 21 01:06:00 crc kubenswrapper[5118]: I0121 01:06:00.511341 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482626-hj4f8"
Jan 21 01:06:00 crc kubenswrapper[5118]: I0121 01:06:00.729490 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482626-hj4f8"]
Jan 21 01:06:00 crc kubenswrapper[5118]: I0121 01:06:00.924218 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482626-hj4f8" event={"ID":"823533f9-8a12-4911-bd76-f562834ef9a4","Type":"ContainerStarted","Data":"a125d61c892e7214b6a5f1540dfa2bafe6aabbc587aca408505fb9346ab057c9"}
Jan 21 01:06:02 crc kubenswrapper[5118]: I0121 01:06:02.293957 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:06:02 crc kubenswrapper[5118]: I0121 01:06:02.294243 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:06:02 crc kubenswrapper[5118]: I0121 01:06:02.348745 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:06:02 crc kubenswrapper[5118]: I0121 01:06:02.988469 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:06:03 crc kubenswrapper[5118]: I0121 01:06:03.976500 5118 scope.go:117] 
"RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3"
Jan 21 01:06:03 crc kubenswrapper[5118]: E0121 01:06:03.976978 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435"
Jan 21 01:06:05 crc kubenswrapper[5118]: I0121 01:06:05.917731 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5h9b"]
Jan 21 01:06:05 crc kubenswrapper[5118]: I0121 01:06:05.965247 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t5h9b" podUID="98930fec-2193-4425-94b3-5ec249907ebd" containerName="registry-server" containerID="cri-o://b7b7db639586f8ded09363504e005ab0b3d523ed067312b74d891e5dd12493a3" gracePeriod=2
Jan 21 01:06:06 crc kubenswrapper[5118]: I0121 01:06:06.983750 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482626-hj4f8" event={"ID":"823533f9-8a12-4911-bd76-f562834ef9a4","Type":"ContainerStarted","Data":"c8f239f55713d366df2a301438118dd6acf70d84b78f8de230ebcc9cd8cd75f6"}
Jan 21 01:06:07 crc kubenswrapper[5118]: I0121 01:06:07.003666 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29482626-hj4f8" podStartSLOduration=1.896637811 podStartE2EDuration="7.003623713s" podCreationTimestamp="2026-01-21 01:06:00 +0000 UTC" firstStartedPulling="2026-01-21 01:06:00.735837417 +0000 UTC m=+3416.060084435" lastFinishedPulling="2026-01-21 01:06:05.842823319 +0000 UTC m=+3421.167070337" observedRunningTime="2026-01-21 01:06:06.999826066 +0000 UTC 
m=+3422.324073074" watchObservedRunningTime="2026-01-21 01:06:07.003623713 +0000 UTC m=+3422.327870771"
Jan 21 01:06:07 crc kubenswrapper[5118]: I0121 01:06:07.991353 5118 generic.go:358] "Generic (PLEG): container finished" podID="823533f9-8a12-4911-bd76-f562834ef9a4" containerID="c8f239f55713d366df2a301438118dd6acf70d84b78f8de230ebcc9cd8cd75f6" exitCode=0
Jan 21 01:06:07 crc kubenswrapper[5118]: I0121 01:06:07.991430 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482626-hj4f8" event={"ID":"823533f9-8a12-4911-bd76-f562834ef9a4","Type":"ContainerDied","Data":"c8f239f55713d366df2a301438118dd6acf70d84b78f8de230ebcc9cd8cd75f6"}
Jan 21 01:06:07 crc kubenswrapper[5118]: I0121 01:06:07.996432 5118 generic.go:358] "Generic (PLEG): container finished" podID="98930fec-2193-4425-94b3-5ec249907ebd" containerID="b7b7db639586f8ded09363504e005ab0b3d523ed067312b74d891e5dd12493a3" exitCode=0
Jan 21 01:06:07 crc kubenswrapper[5118]: I0121 01:06:07.996498 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h9b" event={"ID":"98930fec-2193-4425-94b3-5ec249907ebd","Type":"ContainerDied","Data":"b7b7db639586f8ded09363504e005ab0b3d523ed067312b74d891e5dd12493a3"}
Jan 21 01:06:08 crc kubenswrapper[5118]: I0121 01:06:08.292984 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5h9b"
Jan 21 01:06:08 crc kubenswrapper[5118]: I0121 01:06:08.369961 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98930fec-2193-4425-94b3-5ec249907ebd-catalog-content\") pod \"98930fec-2193-4425-94b3-5ec249907ebd\" (UID: \"98930fec-2193-4425-94b3-5ec249907ebd\") "
Jan 21 01:06:08 crc kubenswrapper[5118]: I0121 01:06:08.370119 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdjm5\" (UniqueName: \"kubernetes.io/projected/98930fec-2193-4425-94b3-5ec249907ebd-kube-api-access-fdjm5\") pod \"98930fec-2193-4425-94b3-5ec249907ebd\" (UID: \"98930fec-2193-4425-94b3-5ec249907ebd\") "
Jan 21 01:06:08 crc kubenswrapper[5118]: I0121 01:06:08.370731 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98930fec-2193-4425-94b3-5ec249907ebd-utilities\") pod \"98930fec-2193-4425-94b3-5ec249907ebd\" (UID: \"98930fec-2193-4425-94b3-5ec249907ebd\") "
Jan 21 01:06:08 crc kubenswrapper[5118]: I0121 01:06:08.373252 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98930fec-2193-4425-94b3-5ec249907ebd-utilities" (OuterVolumeSpecName: "utilities") pod "98930fec-2193-4425-94b3-5ec249907ebd" (UID: "98930fec-2193-4425-94b3-5ec249907ebd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 01:06:08 crc kubenswrapper[5118]: I0121 01:06:08.380503 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98930fec-2193-4425-94b3-5ec249907ebd-kube-api-access-fdjm5" (OuterVolumeSpecName: "kube-api-access-fdjm5") pod "98930fec-2193-4425-94b3-5ec249907ebd" (UID: "98930fec-2193-4425-94b3-5ec249907ebd"). InnerVolumeSpecName "kube-api-access-fdjm5". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 01:06:08 crc kubenswrapper[5118]: I0121 01:06:08.472599 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fdjm5\" (UniqueName: \"kubernetes.io/projected/98930fec-2193-4425-94b3-5ec249907ebd-kube-api-access-fdjm5\") on node \"crc\" DevicePath \"\""
Jan 21 01:06:08 crc kubenswrapper[5118]: I0121 01:06:08.472657 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98930fec-2193-4425-94b3-5ec249907ebd-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 01:06:08 crc kubenswrapper[5118]: I0121 01:06:08.477929 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98930fec-2193-4425-94b3-5ec249907ebd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98930fec-2193-4425-94b3-5ec249907ebd" (UID: "98930fec-2193-4425-94b3-5ec249907ebd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 01:06:08 crc kubenswrapper[5118]: I0121 01:06:08.574067 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98930fec-2193-4425-94b3-5ec249907ebd-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 01:06:09 crc kubenswrapper[5118]: I0121 01:06:09.008380 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5h9b" Jan 21 01:06:09 crc kubenswrapper[5118]: I0121 01:06:09.009226 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h9b" event={"ID":"98930fec-2193-4425-94b3-5ec249907ebd","Type":"ContainerDied","Data":"99abbd34c1e1eb8099e02249ece401fabb5b4170d5fd02eef536a807db85f27b"} Jan 21 01:06:09 crc kubenswrapper[5118]: I0121 01:06:09.009310 5118 scope.go:117] "RemoveContainer" containerID="b7b7db639586f8ded09363504e005ab0b3d523ed067312b74d891e5dd12493a3" Jan 21 01:06:09 crc kubenswrapper[5118]: I0121 01:06:09.045724 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5h9b"] Jan 21 01:06:09 crc kubenswrapper[5118]: I0121 01:06:09.052046 5118 scope.go:117] "RemoveContainer" containerID="d6be483be93aa37017ef58608780e6b87ca5142029ca7a849c47d0ee3e3130ce" Jan 21 01:06:09 crc kubenswrapper[5118]: I0121 01:06:09.052635 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t5h9b"] Jan 21 01:06:09 crc kubenswrapper[5118]: I0121 01:06:09.087826 5118 scope.go:117] "RemoveContainer" containerID="b3a3ff7c3e29e18f64434fea92c5abab7ac7b128fb811de8ed30af8bd0d66f94" Jan 21 01:06:09 crc kubenswrapper[5118]: I0121 01:06:09.198829 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482626-hj4f8" Jan 21 01:06:09 crc kubenswrapper[5118]: I0121 01:06:09.284962 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjtm6\" (UniqueName: \"kubernetes.io/projected/823533f9-8a12-4911-bd76-f562834ef9a4-kube-api-access-zjtm6\") pod \"823533f9-8a12-4911-bd76-f562834ef9a4\" (UID: \"823533f9-8a12-4911-bd76-f562834ef9a4\") " Jan 21 01:06:09 crc kubenswrapper[5118]: I0121 01:06:09.289052 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/823533f9-8a12-4911-bd76-f562834ef9a4-kube-api-access-zjtm6" (OuterVolumeSpecName: "kube-api-access-zjtm6") pod "823533f9-8a12-4911-bd76-f562834ef9a4" (UID: "823533f9-8a12-4911-bd76-f562834ef9a4"). InnerVolumeSpecName "kube-api-access-zjtm6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 01:06:09 crc kubenswrapper[5118]: I0121 01:06:09.403358 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zjtm6\" (UniqueName: \"kubernetes.io/projected/823533f9-8a12-4911-bd76-f562834ef9a4-kube-api-access-zjtm6\") on node \"crc\" DevicePath \"\"" Jan 21 01:06:10 crc kubenswrapper[5118]: I0121 01:06:10.019536 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482626-hj4f8" Jan 21 01:06:10 crc kubenswrapper[5118]: I0121 01:06:10.019573 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482626-hj4f8" event={"ID":"823533f9-8a12-4911-bd76-f562834ef9a4","Type":"ContainerDied","Data":"a125d61c892e7214b6a5f1540dfa2bafe6aabbc587aca408505fb9346ab057c9"} Jan 21 01:06:10 crc kubenswrapper[5118]: I0121 01:06:10.020266 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a125d61c892e7214b6a5f1540dfa2bafe6aabbc587aca408505fb9346ab057c9" Jan 21 01:06:10 crc kubenswrapper[5118]: I0121 01:06:10.073183 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482620-tsrgt"] Jan 21 01:06:10 crc kubenswrapper[5118]: I0121 01:06:10.078366 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482620-tsrgt"] Jan 21 01:06:10 crc kubenswrapper[5118]: I0121 01:06:10.990395 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87220d6d-9864-4c68-ad38-36297e615eaf" path="/var/lib/kubelet/pods/87220d6d-9864-4c68-ad38-36297e615eaf/volumes" Jan 21 01:06:10 crc kubenswrapper[5118]: I0121 01:06:10.991898 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98930fec-2193-4425-94b3-5ec249907ebd" path="/var/lib/kubelet/pods/98930fec-2193-4425-94b3-5ec249907ebd/volumes" Jan 21 01:06:18 crc kubenswrapper[5118]: I0121 01:06:18.981510 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:06:18 crc kubenswrapper[5118]: E0121 01:06:18.982592 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 01:06:29 crc kubenswrapper[5118]: I0121 01:06:29.977978 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:06:29 crc kubenswrapper[5118]: E0121 01:06:29.979153 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 01:06:44 crc kubenswrapper[5118]: I0121 01:06:44.991539 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:06:44 crc kubenswrapper[5118]: E0121 01:06:44.992284 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 01:06:55 crc kubenswrapper[5118]: I0121 01:06:55.978138 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:06:55 crc kubenswrapper[5118]: E0121 01:06:55.979205 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 01:07:07 crc kubenswrapper[5118]: I0121 01:07:07.975947 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:07:07 crc kubenswrapper[5118]: E0121 01:07:07.976669 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 01:07:09 crc kubenswrapper[5118]: I0121 01:07:09.804859 5118 scope.go:117] "RemoveContainer" containerID="eefd4482beaa802a6f72f822bba50fc71b003d3af6c528df379e23aa03640bc1" Jan 21 01:07:21 crc kubenswrapper[5118]: I0121 01:07:21.976363 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:07:21 crc kubenswrapper[5118]: E0121 01:07:21.977594 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 01:07:32 crc kubenswrapper[5118]: I0121 01:07:32.976716 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:07:32 crc kubenswrapper[5118]: 
E0121 01:07:32.977644 5118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-22r9n_openshift-machine-config-operator(44eb9bc7-60a3-421c-bf5e-d1d9a5026435)\"" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" Jan 21 01:07:46 crc kubenswrapper[5118]: I0121 01:07:46.992020 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:07:47 crc kubenswrapper[5118]: I0121 01:07:47.370918 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"ae0beab82a688f82402eab771210d0f66cc0d18af08ece040873012dc551e300"} Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.165966 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482628-d5jgs"] Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.167465 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="98930fec-2193-4425-94b3-5ec249907ebd" containerName="extract-content" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.167489 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="98930fec-2193-4425-94b3-5ec249907ebd" containerName="extract-content" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.167534 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="98930fec-2193-4425-94b3-5ec249907ebd" containerName="extract-utilities" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.167544 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="98930fec-2193-4425-94b3-5ec249907ebd" containerName="extract-utilities" Jan 21 01:08:00 crc 
kubenswrapper[5118]: I0121 01:08:00.169537 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="823533f9-8a12-4911-bd76-f562834ef9a4" containerName="oc" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.169561 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="823533f9-8a12-4911-bd76-f562834ef9a4" containerName="oc" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.169607 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="98930fec-2193-4425-94b3-5ec249907ebd" containerName="registry-server" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.169618 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="98930fec-2193-4425-94b3-5ec249907ebd" containerName="registry-server" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.170416 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="823533f9-8a12-4911-bd76-f562834ef9a4" containerName="oc" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.170451 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="98930fec-2193-4425-94b3-5ec249907ebd" containerName="registry-server" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.177575 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482628-d5jgs"] Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.177770 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482628-d5jgs" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.180996 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.189529 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.189523 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.227225 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqtn2\" (UniqueName: \"kubernetes.io/projected/934aa2d6-8b57-432e-88d6-b330b6c20af5-kube-api-access-dqtn2\") pod \"auto-csr-approver-29482628-d5jgs\" (UID: \"934aa2d6-8b57-432e-88d6-b330b6c20af5\") " pod="openshift-infra/auto-csr-approver-29482628-d5jgs" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.329351 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dqtn2\" (UniqueName: \"kubernetes.io/projected/934aa2d6-8b57-432e-88d6-b330b6c20af5-kube-api-access-dqtn2\") pod \"auto-csr-approver-29482628-d5jgs\" (UID: \"934aa2d6-8b57-432e-88d6-b330b6c20af5\") " pod="openshift-infra/auto-csr-approver-29482628-d5jgs" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.367575 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqtn2\" (UniqueName: \"kubernetes.io/projected/934aa2d6-8b57-432e-88d6-b330b6c20af5-kube-api-access-dqtn2\") pod \"auto-csr-approver-29482628-d5jgs\" (UID: \"934aa2d6-8b57-432e-88d6-b330b6c20af5\") " pod="openshift-infra/auto-csr-approver-29482628-d5jgs" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.511678 5118 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482628-d5jgs" Jan 21 01:08:00 crc kubenswrapper[5118]: I0121 01:08:00.786992 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482628-d5jgs"] Jan 21 01:08:01 crc kubenswrapper[5118]: I0121 01:08:01.520475 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482628-d5jgs" event={"ID":"934aa2d6-8b57-432e-88d6-b330b6c20af5","Type":"ContainerStarted","Data":"bb7c0fb7ef0fa2952f78437620cc84236481ae7909a8931b96a6bd2ff987c1a6"} Jan 21 01:08:02 crc kubenswrapper[5118]: I0121 01:08:02.534099 5118 generic.go:358] "Generic (PLEG): container finished" podID="934aa2d6-8b57-432e-88d6-b330b6c20af5" containerID="71c5b52ee89ee238712b74928263b3d08a34321387f5b904dfb3f038d83f9bf2" exitCode=0 Jan 21 01:08:02 crc kubenswrapper[5118]: I0121 01:08:02.534420 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482628-d5jgs" event={"ID":"934aa2d6-8b57-432e-88d6-b330b6c20af5","Type":"ContainerDied","Data":"71c5b52ee89ee238712b74928263b3d08a34321387f5b904dfb3f038d83f9bf2"} Jan 21 01:08:03 crc kubenswrapper[5118]: I0121 01:08:03.920843 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482628-d5jgs" Jan 21 01:08:04 crc kubenswrapper[5118]: I0121 01:08:04.003921 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqtn2\" (UniqueName: \"kubernetes.io/projected/934aa2d6-8b57-432e-88d6-b330b6c20af5-kube-api-access-dqtn2\") pod \"934aa2d6-8b57-432e-88d6-b330b6c20af5\" (UID: \"934aa2d6-8b57-432e-88d6-b330b6c20af5\") " Jan 21 01:08:04 crc kubenswrapper[5118]: I0121 01:08:04.014592 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934aa2d6-8b57-432e-88d6-b330b6c20af5-kube-api-access-dqtn2" (OuterVolumeSpecName: "kube-api-access-dqtn2") pod "934aa2d6-8b57-432e-88d6-b330b6c20af5" (UID: "934aa2d6-8b57-432e-88d6-b330b6c20af5"). InnerVolumeSpecName "kube-api-access-dqtn2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 01:08:04 crc kubenswrapper[5118]: I0121 01:08:04.106086 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dqtn2\" (UniqueName: \"kubernetes.io/projected/934aa2d6-8b57-432e-88d6-b330b6c20af5-kube-api-access-dqtn2\") on node \"crc\" DevicePath \"\"" Jan 21 01:08:04 crc kubenswrapper[5118]: I0121 01:08:04.554311 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482628-d5jgs" event={"ID":"934aa2d6-8b57-432e-88d6-b330b6c20af5","Type":"ContainerDied","Data":"bb7c0fb7ef0fa2952f78437620cc84236481ae7909a8931b96a6bd2ff987c1a6"} Jan 21 01:08:04 crc kubenswrapper[5118]: I0121 01:08:04.554360 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb7c0fb7ef0fa2952f78437620cc84236481ae7909a8931b96a6bd2ff987c1a6" Jan 21 01:08:04 crc kubenswrapper[5118]: I0121 01:08:04.554388 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482628-d5jgs" Jan 21 01:08:05 crc kubenswrapper[5118]: I0121 01:08:05.034624 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482622-pxxb6"] Jan 21 01:08:05 crc kubenswrapper[5118]: I0121 01:08:05.040332 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482622-pxxb6"] Jan 21 01:08:06 crc kubenswrapper[5118]: I0121 01:08:06.987348 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7245025-2fca-484c-b55a-431b33de7097" path="/var/lib/kubelet/pods/c7245025-2fca-484c-b55a-431b33de7097/volumes" Jan 21 01:08:09 crc kubenswrapper[5118]: I0121 01:08:09.947753 5118 scope.go:117] "RemoveContainer" containerID="4539aac95230688625692e24566004005ca760e195c3821083cbeab5423afc15" Jan 21 01:09:06 crc kubenswrapper[5118]: I0121 01:09:06.471483 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log" Jan 21 01:09:06 crc kubenswrapper[5118]: I0121 01:09:06.488978 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log" Jan 21 01:09:06 crc kubenswrapper[5118]: I0121 01:09:06.490788 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 01:09:06 crc kubenswrapper[5118]: I0121 01:09:06.505189 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 01:09:47 crc kubenswrapper[5118]: I0121 01:09:47.877313 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rxkq9"] Jan 21 01:09:47 crc kubenswrapper[5118]: 
I0121 01:09:47.879232 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="934aa2d6-8b57-432e-88d6-b330b6c20af5" containerName="oc" Jan 21 01:09:47 crc kubenswrapper[5118]: I0121 01:09:47.879257 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="934aa2d6-8b57-432e-88d6-b330b6c20af5" containerName="oc" Jan 21 01:09:47 crc kubenswrapper[5118]: I0121 01:09:47.879531 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="934aa2d6-8b57-432e-88d6-b330b6c20af5" containerName="oc" Jan 21 01:09:47 crc kubenswrapper[5118]: I0121 01:09:47.890730 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:09:47 crc kubenswrapper[5118]: I0121 01:09:47.899876 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxkq9"] Jan 21 01:09:47 crc kubenswrapper[5118]: I0121 01:09:47.967086 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-utilities\") pod \"community-operators-rxkq9\" (UID: \"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39\") " pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:09:47 crc kubenswrapper[5118]: I0121 01:09:47.967253 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-catalog-content\") pod \"community-operators-rxkq9\" (UID: \"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39\") " pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:09:47 crc kubenswrapper[5118]: I0121 01:09:47.967288 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkh4t\" (UniqueName: 
\"kubernetes.io/projected/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-kube-api-access-hkh4t\") pod \"community-operators-rxkq9\" (UID: \"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39\") " pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:09:48 crc kubenswrapper[5118]: I0121 01:09:48.068958 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-utilities\") pod \"community-operators-rxkq9\" (UID: \"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39\") " pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:09:48 crc kubenswrapper[5118]: I0121 01:09:48.069443 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-catalog-content\") pod \"community-operators-rxkq9\" (UID: \"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39\") " pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:09:48 crc kubenswrapper[5118]: I0121 01:09:48.069484 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hkh4t\" (UniqueName: \"kubernetes.io/projected/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-kube-api-access-hkh4t\") pod \"community-operators-rxkq9\" (UID: \"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39\") " pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:09:48 crc kubenswrapper[5118]: I0121 01:09:48.069637 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-utilities\") pod \"community-operators-rxkq9\" (UID: \"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39\") " pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:09:48 crc kubenswrapper[5118]: I0121 01:09:48.069750 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-catalog-content\") pod \"community-operators-rxkq9\" (UID: \"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39\") " pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:09:48 crc kubenswrapper[5118]: I0121 01:09:48.095649 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkh4t\" (UniqueName: \"kubernetes.io/projected/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-kube-api-access-hkh4t\") pod \"community-operators-rxkq9\" (UID: \"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39\") " pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:09:48 crc kubenswrapper[5118]: I0121 01:09:48.216952 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:09:48 crc kubenswrapper[5118]: I0121 01:09:48.508341 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxkq9"] Jan 21 01:09:48 crc kubenswrapper[5118]: I0121 01:09:48.734514 5118 generic.go:358] "Generic (PLEG): container finished" podID="c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39" containerID="41655bc0f16cdb745574114684edcfb2d0819e2501ac3b99b949973f73ae79ab" exitCode=0 Jan 21 01:09:48 crc kubenswrapper[5118]: I0121 01:09:48.734666 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxkq9" event={"ID":"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39","Type":"ContainerDied","Data":"41655bc0f16cdb745574114684edcfb2d0819e2501ac3b99b949973f73ae79ab"} Jan 21 01:09:48 crc kubenswrapper[5118]: I0121 01:09:48.735034 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxkq9" event={"ID":"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39","Type":"ContainerStarted","Data":"cc4fefffa0c4d65f42748de3b87d26d3858de4cf7a5635dbd298a2350a3ab6e2"} Jan 21 01:09:49 crc kubenswrapper[5118]: I0121 01:09:49.745746 5118 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-rxkq9" event={"ID":"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39","Type":"ContainerStarted","Data":"996b88a5fa47c5d5fc022919db6d8a6b6bad44c73501942dad3744a700097abf"} Jan 21 01:09:50 crc kubenswrapper[5118]: I0121 01:09:50.759225 5118 generic.go:358] "Generic (PLEG): container finished" podID="c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39" containerID="996b88a5fa47c5d5fc022919db6d8a6b6bad44c73501942dad3744a700097abf" exitCode=0 Jan 21 01:09:50 crc kubenswrapper[5118]: I0121 01:09:50.759571 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxkq9" event={"ID":"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39","Type":"ContainerDied","Data":"996b88a5fa47c5d5fc022919db6d8a6b6bad44c73501942dad3744a700097abf"} Jan 21 01:09:51 crc kubenswrapper[5118]: I0121 01:09:51.769402 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxkq9" event={"ID":"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39","Type":"ContainerStarted","Data":"1ce2b12d457751129f9af8c485d4fa6fdf7c79f438b6dbf08c261111b37a8060"} Jan 21 01:09:51 crc kubenswrapper[5118]: I0121 01:09:51.798787 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rxkq9" podStartSLOduration=4.154927722 podStartE2EDuration="4.798770115s" podCreationTimestamp="2026-01-21 01:09:47 +0000 UTC" firstStartedPulling="2026-01-21 01:09:48.735682056 +0000 UTC m=+3644.059929064" lastFinishedPulling="2026-01-21 01:09:49.379524439 +0000 UTC m=+3644.703771457" observedRunningTime="2026-01-21 01:09:51.790037892 +0000 UTC m=+3647.114284920" watchObservedRunningTime="2026-01-21 01:09:51.798770115 +0000 UTC m=+3647.123017143" Jan 21 01:09:58 crc kubenswrapper[5118]: I0121 01:09:58.218659 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:09:58 crc kubenswrapper[5118]: I0121 
01:09:58.219131 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:09:58 crc kubenswrapper[5118]: I0121 01:09:58.293951 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:09:58 crc kubenswrapper[5118]: I0121 01:09:58.907770 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:09:58 crc kubenswrapper[5118]: I0121 01:09:58.995452 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxkq9"] Jan 21 01:10:00 crc kubenswrapper[5118]: I0121 01:10:00.150316 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482630-p66fz"] Jan 21 01:10:00 crc kubenswrapper[5118]: I0121 01:10:00.838576 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482630-p66fz"] Jan 21 01:10:00 crc kubenswrapper[5118]: I0121 01:10:00.838767 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482630-p66fz" Jan 21 01:10:00 crc kubenswrapper[5118]: I0121 01:10:00.843324 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 01:10:00 crc kubenswrapper[5118]: I0121 01:10:00.843757 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 01:10:00 crc kubenswrapper[5118]: I0121 01:10:00.843987 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 01:10:00 crc kubenswrapper[5118]: I0121 01:10:00.847300 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c45vj\" (UniqueName: \"kubernetes.io/projected/881da72f-8e7a-43f5-835b-6bb90fbc476a-kube-api-access-c45vj\") pod \"auto-csr-approver-29482630-p66fz\" (UID: \"881da72f-8e7a-43f5-835b-6bb90fbc476a\") " pod="openshift-infra/auto-csr-approver-29482630-p66fz" Jan 21 01:10:00 crc kubenswrapper[5118]: I0121 01:10:00.866197 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rxkq9" podUID="c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39" containerName="registry-server" containerID="cri-o://1ce2b12d457751129f9af8c485d4fa6fdf7c79f438b6dbf08c261111b37a8060" gracePeriod=2 Jan 21 01:10:00 crc kubenswrapper[5118]: I0121 01:10:00.948521 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c45vj\" (UniqueName: \"kubernetes.io/projected/881da72f-8e7a-43f5-835b-6bb90fbc476a-kube-api-access-c45vj\") pod \"auto-csr-approver-29482630-p66fz\" (UID: \"881da72f-8e7a-43f5-835b-6bb90fbc476a\") " pod="openshift-infra/auto-csr-approver-29482630-p66fz" Jan 21 01:10:00 crc kubenswrapper[5118]: I0121 01:10:00.975774 5118 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c45vj\" (UniqueName: \"kubernetes.io/projected/881da72f-8e7a-43f5-835b-6bb90fbc476a-kube-api-access-c45vj\") pod \"auto-csr-approver-29482630-p66fz\" (UID: \"881da72f-8e7a-43f5-835b-6bb90fbc476a\") " pod="openshift-infra/auto-csr-approver-29482630-p66fz" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.179878 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482630-p66fz" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.306134 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.354839 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-utilities\") pod \"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39\" (UID: \"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39\") " Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.354946 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-catalog-content\") pod \"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39\" (UID: \"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39\") " Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.354998 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkh4t\" (UniqueName: \"kubernetes.io/projected/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-kube-api-access-hkh4t\") pod \"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39\" (UID: \"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39\") " Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.356678 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-utilities" (OuterVolumeSpecName: 
"utilities") pod "c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39" (UID: "c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.361270 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-kube-api-access-hkh4t" (OuterVolumeSpecName: "kube-api-access-hkh4t") pod "c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39" (UID: "c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39"). InnerVolumeSpecName "kube-api-access-hkh4t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.439699 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39" (UID: "c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.456950 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.456989 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hkh4t\" (UniqueName: \"kubernetes.io/projected/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-kube-api-access-hkh4t\") on node \"crc\" DevicePath \"\"" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.457004 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.648309 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482630-p66fz"] Jan 21 01:10:01 crc kubenswrapper[5118]: W0121 01:10:01.661444 5118 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod881da72f_8e7a_43f5_835b_6bb90fbc476a.slice/crio-4f70249c0cb56711ea3ebcc5585cbc651c16c214910997f0cfe93ecd2a7cba35 WatchSource:0}: Error finding container 4f70249c0cb56711ea3ebcc5585cbc651c16c214910997f0cfe93ecd2a7cba35: Status 404 returned error can't find the container with id 4f70249c0cb56711ea3ebcc5585cbc651c16c214910997f0cfe93ecd2a7cba35 Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.878137 5118 generic.go:358] "Generic (PLEG): container finished" podID="c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39" containerID="1ce2b12d457751129f9af8c485d4fa6fdf7c79f438b6dbf08c261111b37a8060" exitCode=0 Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.878358 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-rxkq9" event={"ID":"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39","Type":"ContainerDied","Data":"1ce2b12d457751129f9af8c485d4fa6fdf7c79f438b6dbf08c261111b37a8060"} Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.878397 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxkq9" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.878423 5118 scope.go:117] "RemoveContainer" containerID="1ce2b12d457751129f9af8c485d4fa6fdf7c79f438b6dbf08c261111b37a8060" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.878404 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxkq9" event={"ID":"c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39","Type":"ContainerDied","Data":"cc4fefffa0c4d65f42748de3b87d26d3858de4cf7a5635dbd298a2350a3ab6e2"} Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.884147 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482630-p66fz" event={"ID":"881da72f-8e7a-43f5-835b-6bb90fbc476a","Type":"ContainerStarted","Data":"4f70249c0cb56711ea3ebcc5585cbc651c16c214910997f0cfe93ecd2a7cba35"} Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.904129 5118 scope.go:117] "RemoveContainer" containerID="996b88a5fa47c5d5fc022919db6d8a6b6bad44c73501942dad3744a700097abf" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.934185 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxkq9"] Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.946710 5118 scope.go:117] "RemoveContainer" containerID="41655bc0f16cdb745574114684edcfb2d0819e2501ac3b99b949973f73ae79ab" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.948398 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rxkq9"] Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.963625 5118 
scope.go:117] "RemoveContainer" containerID="1ce2b12d457751129f9af8c485d4fa6fdf7c79f438b6dbf08c261111b37a8060" Jan 21 01:10:01 crc kubenswrapper[5118]: E0121 01:10:01.964134 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ce2b12d457751129f9af8c485d4fa6fdf7c79f438b6dbf08c261111b37a8060\": container with ID starting with 1ce2b12d457751129f9af8c485d4fa6fdf7c79f438b6dbf08c261111b37a8060 not found: ID does not exist" containerID="1ce2b12d457751129f9af8c485d4fa6fdf7c79f438b6dbf08c261111b37a8060" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.964185 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce2b12d457751129f9af8c485d4fa6fdf7c79f438b6dbf08c261111b37a8060"} err="failed to get container status \"1ce2b12d457751129f9af8c485d4fa6fdf7c79f438b6dbf08c261111b37a8060\": rpc error: code = NotFound desc = could not find container \"1ce2b12d457751129f9af8c485d4fa6fdf7c79f438b6dbf08c261111b37a8060\": container with ID starting with 1ce2b12d457751129f9af8c485d4fa6fdf7c79f438b6dbf08c261111b37a8060 not found: ID does not exist" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.964212 5118 scope.go:117] "RemoveContainer" containerID="996b88a5fa47c5d5fc022919db6d8a6b6bad44c73501942dad3744a700097abf" Jan 21 01:10:01 crc kubenswrapper[5118]: E0121 01:10:01.964449 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"996b88a5fa47c5d5fc022919db6d8a6b6bad44c73501942dad3744a700097abf\": container with ID starting with 996b88a5fa47c5d5fc022919db6d8a6b6bad44c73501942dad3744a700097abf not found: ID does not exist" containerID="996b88a5fa47c5d5fc022919db6d8a6b6bad44c73501942dad3744a700097abf" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.964477 5118 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"996b88a5fa47c5d5fc022919db6d8a6b6bad44c73501942dad3744a700097abf"} err="failed to get container status \"996b88a5fa47c5d5fc022919db6d8a6b6bad44c73501942dad3744a700097abf\": rpc error: code = NotFound desc = could not find container \"996b88a5fa47c5d5fc022919db6d8a6b6bad44c73501942dad3744a700097abf\": container with ID starting with 996b88a5fa47c5d5fc022919db6d8a6b6bad44c73501942dad3744a700097abf not found: ID does not exist" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.964494 5118 scope.go:117] "RemoveContainer" containerID="41655bc0f16cdb745574114684edcfb2d0819e2501ac3b99b949973f73ae79ab" Jan 21 01:10:01 crc kubenswrapper[5118]: E0121 01:10:01.964767 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41655bc0f16cdb745574114684edcfb2d0819e2501ac3b99b949973f73ae79ab\": container with ID starting with 41655bc0f16cdb745574114684edcfb2d0819e2501ac3b99b949973f73ae79ab not found: ID does not exist" containerID="41655bc0f16cdb745574114684edcfb2d0819e2501ac3b99b949973f73ae79ab" Jan 21 01:10:01 crc kubenswrapper[5118]: I0121 01:10:01.964791 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41655bc0f16cdb745574114684edcfb2d0819e2501ac3b99b949973f73ae79ab"} err="failed to get container status \"41655bc0f16cdb745574114684edcfb2d0819e2501ac3b99b949973f73ae79ab\": rpc error: code = NotFound desc = could not find container \"41655bc0f16cdb745574114684edcfb2d0819e2501ac3b99b949973f73ae79ab\": container with ID starting with 41655bc0f16cdb745574114684edcfb2d0819e2501ac3b99b949973f73ae79ab not found: ID does not exist" Jan 21 01:10:02 crc kubenswrapper[5118]: I0121 01:10:02.987141 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39" path="/var/lib/kubelet/pods/c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39/volumes" Jan 21 01:10:03 crc kubenswrapper[5118]: I0121 
01:10:03.800619 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 01:10:03 crc kubenswrapper[5118]: I0121 01:10:03.801016 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 01:10:03 crc kubenswrapper[5118]: I0121 01:10:03.919120 5118 generic.go:358] "Generic (PLEG): container finished" podID="881da72f-8e7a-43f5-835b-6bb90fbc476a" containerID="19ae5c00d53324f682b5a52eacaf614ec13941cb44b417e2cf1392ab547b90c2" exitCode=0 Jan 21 01:10:03 crc kubenswrapper[5118]: I0121 01:10:03.919213 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482630-p66fz" event={"ID":"881da72f-8e7a-43f5-835b-6bb90fbc476a","Type":"ContainerDied","Data":"19ae5c00d53324f682b5a52eacaf614ec13941cb44b417e2cf1392ab547b90c2"} Jan 21 01:10:05 crc kubenswrapper[5118]: I0121 01:10:05.273594 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482630-p66fz" Jan 21 01:10:05 crc kubenswrapper[5118]: I0121 01:10:05.320459 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c45vj\" (UniqueName: \"kubernetes.io/projected/881da72f-8e7a-43f5-835b-6bb90fbc476a-kube-api-access-c45vj\") pod \"881da72f-8e7a-43f5-835b-6bb90fbc476a\" (UID: \"881da72f-8e7a-43f5-835b-6bb90fbc476a\") " Jan 21 01:10:05 crc kubenswrapper[5118]: I0121 01:10:05.326848 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/881da72f-8e7a-43f5-835b-6bb90fbc476a-kube-api-access-c45vj" (OuterVolumeSpecName: "kube-api-access-c45vj") pod "881da72f-8e7a-43f5-835b-6bb90fbc476a" (UID: "881da72f-8e7a-43f5-835b-6bb90fbc476a"). InnerVolumeSpecName "kube-api-access-c45vj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 01:10:05 crc kubenswrapper[5118]: I0121 01:10:05.423046 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c45vj\" (UniqueName: \"kubernetes.io/projected/881da72f-8e7a-43f5-835b-6bb90fbc476a-kube-api-access-c45vj\") on node \"crc\" DevicePath \"\"" Jan 21 01:10:05 crc kubenswrapper[5118]: I0121 01:10:05.942927 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482630-p66fz" event={"ID":"881da72f-8e7a-43f5-835b-6bb90fbc476a","Type":"ContainerDied","Data":"4f70249c0cb56711ea3ebcc5585cbc651c16c214910997f0cfe93ecd2a7cba35"} Jan 21 01:10:05 crc kubenswrapper[5118]: I0121 01:10:05.943002 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f70249c0cb56711ea3ebcc5585cbc651c16c214910997f0cfe93ecd2a7cba35" Jan 21 01:10:05 crc kubenswrapper[5118]: I0121 01:10:05.943121 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482630-p66fz" Jan 21 01:10:06 crc kubenswrapper[5118]: I0121 01:10:06.361841 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482624-cr6x2"] Jan 21 01:10:06 crc kubenswrapper[5118]: I0121 01:10:06.372185 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482624-cr6x2"] Jan 21 01:10:06 crc kubenswrapper[5118]: I0121 01:10:06.987406 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1ad47f3-ea14-474a-b12a-7c357dafacad" path="/var/lib/kubelet/pods/e1ad47f3-ea14-474a-b12a-7c357dafacad/volumes" Jan 21 01:10:10 crc kubenswrapper[5118]: I0121 01:10:10.140622 5118 scope.go:117] "RemoveContainer" containerID="41ac96f417cb3400f7d6fb5248587b5aeccf264c26917398399141ef69340d56" Jan 21 01:10:33 crc kubenswrapper[5118]: I0121 01:10:33.801412 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 01:10:33 crc kubenswrapper[5118]: I0121 01:10:33.802127 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 01:11:03 crc kubenswrapper[5118]: I0121 01:11:03.801175 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 01:11:03 crc kubenswrapper[5118]: 
I0121 01:11:03.801893 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 01:11:03 crc kubenswrapper[5118]: I0121 01:11:03.801954 5118 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" Jan 21 01:11:03 crc kubenswrapper[5118]: I0121 01:11:03.802690 5118 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae0beab82a688f82402eab771210d0f66cc0d18af08ece040873012dc551e300"} pod="openshift-machine-config-operator/machine-config-daemon-22r9n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 01:11:03 crc kubenswrapper[5118]: I0121 01:11:03.802792 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" containerID="cri-o://ae0beab82a688f82402eab771210d0f66cc0d18af08ece040873012dc551e300" gracePeriod=600 Jan 21 01:11:03 crc kubenswrapper[5118]: I0121 01:11:03.935059 5118 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 01:11:04 crc kubenswrapper[5118]: I0121 01:11:04.513475 5118 generic.go:358] "Generic (PLEG): container finished" podID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerID="ae0beab82a688f82402eab771210d0f66cc0d18af08ece040873012dc551e300" exitCode=0 Jan 21 01:11:04 crc kubenswrapper[5118]: I0121 01:11:04.513556 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" 
event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerDied","Data":"ae0beab82a688f82402eab771210d0f66cc0d18af08ece040873012dc551e300"} Jan 21 01:11:04 crc kubenswrapper[5118]: I0121 01:11:04.513904 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" event={"ID":"44eb9bc7-60a3-421c-bf5e-d1d9a5026435","Type":"ContainerStarted","Data":"eb9bfdfc518d2c28513757d38a066fb6778edcf177d1e6f7f6cd95f0bd700c19"} Jan 21 01:11:04 crc kubenswrapper[5118]: I0121 01:11:04.513930 5118 scope.go:117] "RemoveContainer" containerID="1cfa4853a1d05427b9738ec633779c50d43c028f2c146af21d68ecb205dd62a3" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.531571 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sn5jj"] Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.540238 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39" containerName="extract-utilities" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.540274 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39" containerName="extract-utilities" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.540294 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39" containerName="registry-server" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.540304 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39" containerName="registry-server" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.540315 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="881da72f-8e7a-43f5-835b-6bb90fbc476a" containerName="oc" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.540323 5118 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="881da72f-8e7a-43f5-835b-6bb90fbc476a" containerName="oc" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.540350 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39" containerName="extract-content" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.540358 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39" containerName="extract-content" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.540567 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="c30bd5a5-ccb6-42e8-9c8d-ef639a59bd39" containerName="registry-server" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.540592 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="881da72f-8e7a-43f5-835b-6bb90fbc476a" containerName="oc" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.548493 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sn5jj" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.550848 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sn5jj"] Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.666985 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-utilities\") pod \"certified-operators-sn5jj\" (UID: \"3efaf6a5-e5ae-4c95-be59-c57db7f297c0\") " pod="openshift-marketplace/certified-operators-sn5jj" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.667102 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nd86\" (UniqueName: \"kubernetes.io/projected/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-kube-api-access-9nd86\") pod \"certified-operators-sn5jj\" (UID: 
\"3efaf6a5-e5ae-4c95-be59-c57db7f297c0\") " pod="openshift-marketplace/certified-operators-sn5jj" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.667187 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-catalog-content\") pod \"certified-operators-sn5jj\" (UID: \"3efaf6a5-e5ae-4c95-be59-c57db7f297c0\") " pod="openshift-marketplace/certified-operators-sn5jj" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.768645 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-utilities\") pod \"certified-operators-sn5jj\" (UID: \"3efaf6a5-e5ae-4c95-be59-c57db7f297c0\") " pod="openshift-marketplace/certified-operators-sn5jj" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.768743 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9nd86\" (UniqueName: \"kubernetes.io/projected/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-kube-api-access-9nd86\") pod \"certified-operators-sn5jj\" (UID: \"3efaf6a5-e5ae-4c95-be59-c57db7f297c0\") " pod="openshift-marketplace/certified-operators-sn5jj" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.768789 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-catalog-content\") pod \"certified-operators-sn5jj\" (UID: \"3efaf6a5-e5ae-4c95-be59-c57db7f297c0\") " pod="openshift-marketplace/certified-operators-sn5jj" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.769218 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-utilities\") pod \"certified-operators-sn5jj\" (UID: 
\"3efaf6a5-e5ae-4c95-be59-c57db7f297c0\") " pod="openshift-marketplace/certified-operators-sn5jj" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.769238 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-catalog-content\") pod \"certified-operators-sn5jj\" (UID: \"3efaf6a5-e5ae-4c95-be59-c57db7f297c0\") " pod="openshift-marketplace/certified-operators-sn5jj" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.790680 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nd86\" (UniqueName: \"kubernetes.io/projected/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-kube-api-access-9nd86\") pod \"certified-operators-sn5jj\" (UID: \"3efaf6a5-e5ae-4c95-be59-c57db7f297c0\") " pod="openshift-marketplace/certified-operators-sn5jj" Jan 21 01:11:54 crc kubenswrapper[5118]: I0121 01:11:54.874272 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sn5jj" Jan 21 01:11:55 crc kubenswrapper[5118]: I0121 01:11:55.123081 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sn5jj"] Jan 21 01:11:55 crc kubenswrapper[5118]: I0121 01:11:55.955819 5118 generic.go:358] "Generic (PLEG): container finished" podID="3efaf6a5-e5ae-4c95-be59-c57db7f297c0" containerID="d0eaabf3ec2f54f89bf7358d1834913b280d6262754b49829f0d4e0bd10b57e9" exitCode=0 Jan 21 01:11:55 crc kubenswrapper[5118]: I0121 01:11:55.955879 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn5jj" event={"ID":"3efaf6a5-e5ae-4c95-be59-c57db7f297c0","Type":"ContainerDied","Data":"d0eaabf3ec2f54f89bf7358d1834913b280d6262754b49829f0d4e0bd10b57e9"} Jan 21 01:11:55 crc kubenswrapper[5118]: I0121 01:11:55.957869 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn5jj" event={"ID":"3efaf6a5-e5ae-4c95-be59-c57db7f297c0","Type":"ContainerStarted","Data":"7c23aee19c86cbd767f5d6a5088a789f9d0451895048c467431c956e180a7944"} Jan 21 01:11:57 crc kubenswrapper[5118]: I0121 01:11:57.978261 5118 generic.go:358] "Generic (PLEG): container finished" podID="3efaf6a5-e5ae-4c95-be59-c57db7f297c0" containerID="ffae0cd101dabfdc81f8dec560c8c5528ac3c7b666bc26ad95d656243a098d25" exitCode=0 Jan 21 01:11:57 crc kubenswrapper[5118]: I0121 01:11:57.980117 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn5jj" event={"ID":"3efaf6a5-e5ae-4c95-be59-c57db7f297c0","Type":"ContainerDied","Data":"ffae0cd101dabfdc81f8dec560c8c5528ac3c7b666bc26ad95d656243a098d25"} Jan 21 01:11:58 crc kubenswrapper[5118]: I0121 01:11:58.987312 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn5jj" 
event={"ID":"3efaf6a5-e5ae-4c95-be59-c57db7f297c0","Type":"ContainerStarted","Data":"5f4feaf932127cf1985fd358b015d40d434456ad341025a4c56f0132bc111b78"} Jan 21 01:11:59 crc kubenswrapper[5118]: I0121 01:11:59.010697 5118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sn5jj" podStartSLOduration=4.09094033 podStartE2EDuration="5.010662454s" podCreationTimestamp="2026-01-21 01:11:54 +0000 UTC" firstStartedPulling="2026-01-21 01:11:55.959557881 +0000 UTC m=+3771.283804949" lastFinishedPulling="2026-01-21 01:11:56.879280035 +0000 UTC m=+3772.203527073" observedRunningTime="2026-01-21 01:11:59.010469259 +0000 UTC m=+3774.334716267" watchObservedRunningTime="2026-01-21 01:11:59.010662454 +0000 UTC m=+3774.334909472" Jan 21 01:12:00 crc kubenswrapper[5118]: I0121 01:12:00.150677 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482632-9qxpw"] Jan 21 01:12:00 crc kubenswrapper[5118]: I0121 01:12:00.160976 5118 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482632-9qxpw" Jan 21 01:12:00 crc kubenswrapper[5118]: I0121 01:12:00.161216 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482632-9qxpw"] Jan 21 01:12:00 crc kubenswrapper[5118]: I0121 01:12:00.163616 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 01:12:00 crc kubenswrapper[5118]: I0121 01:12:00.163868 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 01:12:00 crc kubenswrapper[5118]: I0121 01:12:00.164687 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\"" Jan 21 01:12:00 crc kubenswrapper[5118]: I0121 01:12:00.258109 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb4r8\" (UniqueName: \"kubernetes.io/projected/de63e152-9371-4019-89b6-b1c77be577a9-kube-api-access-wb4r8\") pod \"auto-csr-approver-29482632-9qxpw\" (UID: \"de63e152-9371-4019-89b6-b1c77be577a9\") " pod="openshift-infra/auto-csr-approver-29482632-9qxpw" Jan 21 01:12:00 crc kubenswrapper[5118]: I0121 01:12:00.360047 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wb4r8\" (UniqueName: \"kubernetes.io/projected/de63e152-9371-4019-89b6-b1c77be577a9-kube-api-access-wb4r8\") pod \"auto-csr-approver-29482632-9qxpw\" (UID: \"de63e152-9371-4019-89b6-b1c77be577a9\") " pod="openshift-infra/auto-csr-approver-29482632-9qxpw" Jan 21 01:12:00 crc kubenswrapper[5118]: I0121 01:12:00.390747 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb4r8\" (UniqueName: \"kubernetes.io/projected/de63e152-9371-4019-89b6-b1c77be577a9-kube-api-access-wb4r8\") pod \"auto-csr-approver-29482632-9qxpw\" (UID: 
\"de63e152-9371-4019-89b6-b1c77be577a9\") " pod="openshift-infra/auto-csr-approver-29482632-9qxpw" Jan 21 01:12:00 crc kubenswrapper[5118]: I0121 01:12:00.481269 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482632-9qxpw" Jan 21 01:12:00 crc kubenswrapper[5118]: I0121 01:12:00.731187 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482632-9qxpw"] Jan 21 01:12:01 crc kubenswrapper[5118]: I0121 01:12:01.005401 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482632-9qxpw" event={"ID":"de63e152-9371-4019-89b6-b1c77be577a9","Type":"ContainerStarted","Data":"db732e289f7f4238adac04d2de040f5642f82c7cbaa40ac41fbacb2fa13e5a4e"} Jan 21 01:12:03 crc kubenswrapper[5118]: I0121 01:12:03.032903 5118 generic.go:358] "Generic (PLEG): container finished" podID="de63e152-9371-4019-89b6-b1c77be577a9" containerID="9b45acb1b8173c1c91dc175aa6ec5fc0ba95cd1cb6bd9aa07f5832fc6ad16db5" exitCode=0 Jan 21 01:12:03 crc kubenswrapper[5118]: I0121 01:12:03.033033 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482632-9qxpw" event={"ID":"de63e152-9371-4019-89b6-b1c77be577a9","Type":"ContainerDied","Data":"9b45acb1b8173c1c91dc175aa6ec5fc0ba95cd1cb6bd9aa07f5832fc6ad16db5"} Jan 21 01:12:04 crc kubenswrapper[5118]: I0121 01:12:04.271412 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482632-9qxpw" Jan 21 01:12:04 crc kubenswrapper[5118]: I0121 01:12:04.344220 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb4r8\" (UniqueName: \"kubernetes.io/projected/de63e152-9371-4019-89b6-b1c77be577a9-kube-api-access-wb4r8\") pod \"de63e152-9371-4019-89b6-b1c77be577a9\" (UID: \"de63e152-9371-4019-89b6-b1c77be577a9\") " Jan 21 01:12:04 crc kubenswrapper[5118]: I0121 01:12:04.350994 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de63e152-9371-4019-89b6-b1c77be577a9-kube-api-access-wb4r8" (OuterVolumeSpecName: "kube-api-access-wb4r8") pod "de63e152-9371-4019-89b6-b1c77be577a9" (UID: "de63e152-9371-4019-89b6-b1c77be577a9"). InnerVolumeSpecName "kube-api-access-wb4r8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 01:12:04 crc kubenswrapper[5118]: I0121 01:12:04.446212 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wb4r8\" (UniqueName: \"kubernetes.io/projected/de63e152-9371-4019-89b6-b1c77be577a9-kube-api-access-wb4r8\") on node \"crc\" DevicePath \"\"" Jan 21 01:12:04 crc kubenswrapper[5118]: I0121 01:12:04.875207 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sn5jj" Jan 21 01:12:04 crc kubenswrapper[5118]: I0121 01:12:04.876141 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-sn5jj" Jan 21 01:12:04 crc kubenswrapper[5118]: I0121 01:12:04.929870 5118 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sn5jj" Jan 21 01:12:05 crc kubenswrapper[5118]: I0121 01:12:05.050222 5118 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29482632-9qxpw"
Jan 21 01:12:05 crc kubenswrapper[5118]: I0121 01:12:05.050627 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482632-9qxpw" event={"ID":"de63e152-9371-4019-89b6-b1c77be577a9","Type":"ContainerDied","Data":"db732e289f7f4238adac04d2de040f5642f82c7cbaa40ac41fbacb2fa13e5a4e"}
Jan 21 01:12:05 crc kubenswrapper[5118]: I0121 01:12:05.050650 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db732e289f7f4238adac04d2de040f5642f82c7cbaa40ac41fbacb2fa13e5a4e"
Jan 21 01:12:05 crc kubenswrapper[5118]: I0121 01:12:05.099688 5118 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sn5jj"
Jan 21 01:12:05 crc kubenswrapper[5118]: I0121 01:12:05.184231 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sn5jj"]
Jan 21 01:12:05 crc kubenswrapper[5118]: I0121 01:12:05.355032 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482626-hj4f8"]
Jan 21 01:12:05 crc kubenswrapper[5118]: I0121 01:12:05.363574 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482626-hj4f8"]
Jan 21 01:12:06 crc kubenswrapper[5118]: I0121 01:12:06.993151 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="823533f9-8a12-4911-bd76-f562834ef9a4" path="/var/lib/kubelet/pods/823533f9-8a12-4911-bd76-f562834ef9a4/volumes"
Jan 21 01:12:07 crc kubenswrapper[5118]: I0121 01:12:07.071525 5118 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sn5jj" podUID="3efaf6a5-e5ae-4c95-be59-c57db7f297c0" containerName="registry-server" containerID="cri-o://5f4feaf932127cf1985fd358b015d40d434456ad341025a4c56f0132bc111b78" gracePeriod=2
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.057221 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sn5jj"
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.087952 5118 generic.go:358] "Generic (PLEG): container finished" podID="3efaf6a5-e5ae-4c95-be59-c57db7f297c0" containerID="5f4feaf932127cf1985fd358b015d40d434456ad341025a4c56f0132bc111b78" exitCode=0
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.088125 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn5jj" event={"ID":"3efaf6a5-e5ae-4c95-be59-c57db7f297c0","Type":"ContainerDied","Data":"5f4feaf932127cf1985fd358b015d40d434456ad341025a4c56f0132bc111b78"}
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.088197 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sn5jj"
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.088232 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn5jj" event={"ID":"3efaf6a5-e5ae-4c95-be59-c57db7f297c0","Type":"ContainerDied","Data":"7c23aee19c86cbd767f5d6a5088a789f9d0451895048c467431c956e180a7944"}
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.088259 5118 scope.go:117] "RemoveContainer" containerID="5f4feaf932127cf1985fd358b015d40d434456ad341025a4c56f0132bc111b78"
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.109277 5118 scope.go:117] "RemoveContainer" containerID="ffae0cd101dabfdc81f8dec560c8c5528ac3c7b666bc26ad95d656243a098d25"
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.131098 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-utilities\") pod \"3efaf6a5-e5ae-4c95-be59-c57db7f297c0\" (UID: \"3efaf6a5-e5ae-4c95-be59-c57db7f297c0\") "
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.131321 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-catalog-content\") pod \"3efaf6a5-e5ae-4c95-be59-c57db7f297c0\" (UID: \"3efaf6a5-e5ae-4c95-be59-c57db7f297c0\") "
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.131365 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nd86\" (UniqueName: \"kubernetes.io/projected/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-kube-api-access-9nd86\") pod \"3efaf6a5-e5ae-4c95-be59-c57db7f297c0\" (UID: \"3efaf6a5-e5ae-4c95-be59-c57db7f297c0\") "
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.133898 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-utilities" (OuterVolumeSpecName: "utilities") pod "3efaf6a5-e5ae-4c95-be59-c57db7f297c0" (UID: "3efaf6a5-e5ae-4c95-be59-c57db7f297c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.138922 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-kube-api-access-9nd86" (OuterVolumeSpecName: "kube-api-access-9nd86") pod "3efaf6a5-e5ae-4c95-be59-c57db7f297c0" (UID: "3efaf6a5-e5ae-4c95-be59-c57db7f297c0"). InnerVolumeSpecName "kube-api-access-9nd86". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.141072 5118 scope.go:117] "RemoveContainer" containerID="d0eaabf3ec2f54f89bf7358d1834913b280d6262754b49829f0d4e0bd10b57e9"
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.174740 5118 scope.go:117] "RemoveContainer" containerID="5f4feaf932127cf1985fd358b015d40d434456ad341025a4c56f0132bc111b78"
Jan 21 01:12:08 crc kubenswrapper[5118]: E0121 01:12:08.175482 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f4feaf932127cf1985fd358b015d40d434456ad341025a4c56f0132bc111b78\": container with ID starting with 5f4feaf932127cf1985fd358b015d40d434456ad341025a4c56f0132bc111b78 not found: ID does not exist" containerID="5f4feaf932127cf1985fd358b015d40d434456ad341025a4c56f0132bc111b78"
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.175566 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f4feaf932127cf1985fd358b015d40d434456ad341025a4c56f0132bc111b78"} err="failed to get container status \"5f4feaf932127cf1985fd358b015d40d434456ad341025a4c56f0132bc111b78\": rpc error: code = NotFound desc = could not find container \"5f4feaf932127cf1985fd358b015d40d434456ad341025a4c56f0132bc111b78\": container with ID starting with 5f4feaf932127cf1985fd358b015d40d434456ad341025a4c56f0132bc111b78 not found: ID does not exist"
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.175618 5118 scope.go:117] "RemoveContainer" containerID="ffae0cd101dabfdc81f8dec560c8c5528ac3c7b666bc26ad95d656243a098d25"
Jan 21 01:12:08 crc kubenswrapper[5118]: E0121 01:12:08.176097 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffae0cd101dabfdc81f8dec560c8c5528ac3c7b666bc26ad95d656243a098d25\": container with ID starting with ffae0cd101dabfdc81f8dec560c8c5528ac3c7b666bc26ad95d656243a098d25 not found: ID does not exist" containerID="ffae0cd101dabfdc81f8dec560c8c5528ac3c7b666bc26ad95d656243a098d25"
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.176141 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffae0cd101dabfdc81f8dec560c8c5528ac3c7b666bc26ad95d656243a098d25"} err="failed to get container status \"ffae0cd101dabfdc81f8dec560c8c5528ac3c7b666bc26ad95d656243a098d25\": rpc error: code = NotFound desc = could not find container \"ffae0cd101dabfdc81f8dec560c8c5528ac3c7b666bc26ad95d656243a098d25\": container with ID starting with ffae0cd101dabfdc81f8dec560c8c5528ac3c7b666bc26ad95d656243a098d25 not found: ID does not exist"
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.176186 5118 scope.go:117] "RemoveContainer" containerID="d0eaabf3ec2f54f89bf7358d1834913b280d6262754b49829f0d4e0bd10b57e9"
Jan 21 01:12:08 crc kubenswrapper[5118]: E0121 01:12:08.176615 5118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0eaabf3ec2f54f89bf7358d1834913b280d6262754b49829f0d4e0bd10b57e9\": container with ID starting with d0eaabf3ec2f54f89bf7358d1834913b280d6262754b49829f0d4e0bd10b57e9 not found: ID does not exist" containerID="d0eaabf3ec2f54f89bf7358d1834913b280d6262754b49829f0d4e0bd10b57e9"
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.176646 5118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0eaabf3ec2f54f89bf7358d1834913b280d6262754b49829f0d4e0bd10b57e9"} err="failed to get container status \"d0eaabf3ec2f54f89bf7358d1834913b280d6262754b49829f0d4e0bd10b57e9\": rpc error: code = NotFound desc = could not find container \"d0eaabf3ec2f54f89bf7358d1834913b280d6262754b49829f0d4e0bd10b57e9\": container with ID starting with d0eaabf3ec2f54f89bf7358d1834913b280d6262754b49829f0d4e0bd10b57e9 not found: ID does not exist"
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.179431 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3efaf6a5-e5ae-4c95-be59-c57db7f297c0" (UID: "3efaf6a5-e5ae-4c95-be59-c57db7f297c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.233000 5118 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.233032 5118 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.233042 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9nd86\" (UniqueName: \"kubernetes.io/projected/3efaf6a5-e5ae-4c95-be59-c57db7f297c0-kube-api-access-9nd86\") on node \"crc\" DevicePath \"\""
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.432766 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sn5jj"]
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.439391 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sn5jj"]
Jan 21 01:12:08 crc kubenswrapper[5118]: I0121 01:12:08.987893 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3efaf6a5-e5ae-4c95-be59-c57db7f297c0" path="/var/lib/kubelet/pods/3efaf6a5-e5ae-4c95-be59-c57db7f297c0/volumes"
Jan 21 01:12:10 crc kubenswrapper[5118]: I0121 01:12:10.319756 5118 scope.go:117] "RemoveContainer" containerID="c8f239f55713d366df2a301438118dd6acf70d84b78f8de230ebcc9cd8cd75f6"
Jan 21 01:13:33 crc kubenswrapper[5118]: I0121 01:13:33.801276 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 01:13:33 crc kubenswrapper[5118]: I0121 01:13:33.801950 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.136764 5118 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29482634-rgq7c"]
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.138149 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3efaf6a5-e5ae-4c95-be59-c57db7f297c0" containerName="extract-utilities"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.138182 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efaf6a5-e5ae-4c95-be59-c57db7f297c0" containerName="extract-utilities"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.138203 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3efaf6a5-e5ae-4c95-be59-c57db7f297c0" containerName="registry-server"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.138211 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efaf6a5-e5ae-4c95-be59-c57db7f297c0" containerName="registry-server"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.138238 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3efaf6a5-e5ae-4c95-be59-c57db7f297c0" containerName="extract-content"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.138246 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efaf6a5-e5ae-4c95-be59-c57db7f297c0" containerName="extract-content"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.138289 5118 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de63e152-9371-4019-89b6-b1c77be577a9" containerName="oc"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.138298 5118 state_mem.go:107] "Deleted CPUSet assignment" podUID="de63e152-9371-4019-89b6-b1c77be577a9" containerName="oc"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.138437 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="3efaf6a5-e5ae-4c95-be59-c57db7f297c0" containerName="registry-server"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.138447 5118 memory_manager.go:356] "RemoveStaleState removing state" podUID="de63e152-9371-4019-89b6-b1c77be577a9" containerName="oc"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.156574 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482634-rgq7c"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.157332 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482634-rgq7c"]
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.159978 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.160227 5118 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.160464 5118 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-tk7qc\""
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.292337 5118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l5mz\" (UniqueName: \"kubernetes.io/projected/7ab90ae1-171f-4cd3-b347-396c2c79ad4a-kube-api-access-4l5mz\") pod \"auto-csr-approver-29482634-rgq7c\" (UID: \"7ab90ae1-171f-4cd3-b347-396c2c79ad4a\") " pod="openshift-infra/auto-csr-approver-29482634-rgq7c"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.407405 5118 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4l5mz\" (UniqueName: \"kubernetes.io/projected/7ab90ae1-171f-4cd3-b347-396c2c79ad4a-kube-api-access-4l5mz\") pod \"auto-csr-approver-29482634-rgq7c\" (UID: \"7ab90ae1-171f-4cd3-b347-396c2c79ad4a\") " pod="openshift-infra/auto-csr-approver-29482634-rgq7c"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.428149 5118 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l5mz\" (UniqueName: \"kubernetes.io/projected/7ab90ae1-171f-4cd3-b347-396c2c79ad4a-kube-api-access-4l5mz\") pod \"auto-csr-approver-29482634-rgq7c\" (UID: \"7ab90ae1-171f-4cd3-b347-396c2c79ad4a\") " pod="openshift-infra/auto-csr-approver-29482634-rgq7c"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.478224 5118 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482634-rgq7c"
Jan 21 01:14:00 crc kubenswrapper[5118]: I0121 01:14:00.926265 5118 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29482634-rgq7c"]
Jan 21 01:14:01 crc kubenswrapper[5118]: I0121 01:14:01.081206 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482634-rgq7c" event={"ID":"7ab90ae1-171f-4cd3-b347-396c2c79ad4a","Type":"ContainerStarted","Data":"e29805736092942a6473b84b646149cd43498f03c6cb5c41f32dad85f2e6fac0"}
Jan 21 01:14:03 crc kubenswrapper[5118]: I0121 01:14:03.096047 5118 generic.go:358] "Generic (PLEG): container finished" podID="7ab90ae1-171f-4cd3-b347-396c2c79ad4a" containerID="78fba018348d205c7658a123127703cc1013997da5347d62bcc50ab26d821a0b" exitCode=0
Jan 21 01:14:03 crc kubenswrapper[5118]: I0121 01:14:03.096125 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482634-rgq7c" event={"ID":"7ab90ae1-171f-4cd3-b347-396c2c79ad4a","Type":"ContainerDied","Data":"78fba018348d205c7658a123127703cc1013997da5347d62bcc50ab26d821a0b"}
Jan 21 01:14:03 crc kubenswrapper[5118]: I0121 01:14:03.801408 5118 patch_prober.go:28] interesting pod/machine-config-daemon-22r9n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 01:14:03 crc kubenswrapper[5118]: I0121 01:14:03.801520 5118 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-22r9n" podUID="44eb9bc7-60a3-421c-bf5e-d1d9a5026435" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 01:14:04 crc kubenswrapper[5118]: I0121 01:14:04.414966 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482634-rgq7c"
Jan 21 01:14:04 crc kubenswrapper[5118]: I0121 01:14:04.487127 5118 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l5mz\" (UniqueName: \"kubernetes.io/projected/7ab90ae1-171f-4cd3-b347-396c2c79ad4a-kube-api-access-4l5mz\") pod \"7ab90ae1-171f-4cd3-b347-396c2c79ad4a\" (UID: \"7ab90ae1-171f-4cd3-b347-396c2c79ad4a\") "
Jan 21 01:14:04 crc kubenswrapper[5118]: I0121 01:14:04.495965 5118 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab90ae1-171f-4cd3-b347-396c2c79ad4a-kube-api-access-4l5mz" (OuterVolumeSpecName: "kube-api-access-4l5mz") pod "7ab90ae1-171f-4cd3-b347-396c2c79ad4a" (UID: "7ab90ae1-171f-4cd3-b347-396c2c79ad4a"). InnerVolumeSpecName "kube-api-access-4l5mz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 01:14:04 crc kubenswrapper[5118]: I0121 01:14:04.588778 5118 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4l5mz\" (UniqueName: \"kubernetes.io/projected/7ab90ae1-171f-4cd3-b347-396c2c79ad4a-kube-api-access-4l5mz\") on node \"crc\" DevicePath \"\""
Jan 21 01:14:05 crc kubenswrapper[5118]: I0121 01:14:05.114510 5118 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29482634-rgq7c" event={"ID":"7ab90ae1-171f-4cd3-b347-396c2c79ad4a","Type":"ContainerDied","Data":"e29805736092942a6473b84b646149cd43498f03c6cb5c41f32dad85f2e6fac0"}
Jan 21 01:14:05 crc kubenswrapper[5118]: I0121 01:14:05.114553 5118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e29805736092942a6473b84b646149cd43498f03c6cb5c41f32dad85f2e6fac0"
Jan 21 01:14:05 crc kubenswrapper[5118]: I0121 01:14:05.114631 5118 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29482634-rgq7c"
Jan 21 01:14:05 crc kubenswrapper[5118]: I0121 01:14:05.488614 5118 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29482628-d5jgs"]
Jan 21 01:14:05 crc kubenswrapper[5118]: I0121 01:14:05.496366 5118 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29482628-d5jgs"]
Jan 21 01:14:06 crc kubenswrapper[5118]: I0121 01:14:06.580290 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log"
Jan 21 01:14:06 crc kubenswrapper[5118]: I0121 01:14:06.593846 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 01:14:06 crc kubenswrapper[5118]: I0121 01:14:06.602740 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qcqwq_7c0390f5-26b4-4299-958c-acac058be619/kube-multus/0.log"
Jan 21 01:14:06 crc kubenswrapper[5118]: I0121 01:14:06.616388 5118 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 01:14:06 crc kubenswrapper[5118]: I0121 01:14:06.990508 5118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934aa2d6-8b57-432e-88d6-b330b6c20af5" path="/var/lib/kubelet/pods/934aa2d6-8b57-432e-88d6-b330b6c20af5/volumes"